00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1822 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3083 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.016 The recommended git tool is: git 00:00:00.017 using credential 00000000-0000-0000-0000-000000000002 00:00:00.018 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.030 Fetching changes from the remote Git repository 00:00:00.032 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.046 Using shallow fetch with depth 1 00:00:00.046 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.046 > git --version # timeout=10 00:00:00.069 > git --version # 'git version 2.39.2' 00:00:00.069 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.070 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.070 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.251 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.263 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.277 Checking out Revision f620ee97e10840540f53609861ee9b86caa3c192 (FETCH_HEAD) 00:00:03.277 > git config core.sparsecheckout # timeout=10 00:00:03.288 > git read-tree -mu HEAD # timeout=10 00:00:03.304 > git checkout -f f620ee97e10840540f53609861ee9b86caa3c192 # timeout=5 00:00:03.325 Commit message: "change IP of vertiv1 PDU" 00:00:03.325 > git rev-list --no-walk f620ee97e10840540f53609861ee9b86caa3c192 # timeout=10 00:00:03.429 [Pipeline] Start of Pipeline 00:00:03.441 [Pipeline] library 00:00:03.443 Loading library shm_lib@master 00:00:03.443 Library shm_lib@master is cached. Copying from home. 00:00:03.462 [Pipeline] node 00:00:03.468 Running on FCP07 in /var/jenkins/workspace/dsa-phy-autotest 00:00:03.472 [Pipeline] { 00:00:03.483 [Pipeline] catchError 00:00:03.485 [Pipeline] { 00:00:03.495 [Pipeline] wrap 00:00:03.503 [Pipeline] { 00:00:03.509 [Pipeline] stage 00:00:03.511 [Pipeline] { (Prologue) 00:00:03.706 [Pipeline] sh 00:00:03.990 + logger -p user.info -t JENKINS-CI 00:00:04.007 [Pipeline] echo 00:00:04.008 Node: FCP07 00:00:04.014 [Pipeline] sh 00:00:04.315 [Pipeline] setCustomBuildProperty 00:00:04.327 [Pipeline] echo 00:00:04.329 Cleanup processes 00:00:04.334 [Pipeline] sh 00:00:04.617 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:04.617 3689416 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:04.628 [Pipeline] sh 00:00:04.913 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:04.913 ++ grep -v 'sudo pgrep' 00:00:04.913 ++ awk '{print $1}' 00:00:04.913 + sudo kill -9 00:00:04.913 + true 00:00:04.925 [Pipeline] cleanWs 00:00:04.934 [WS-CLEANUP] Deleting project workspace... 00:00:04.934 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.940 [WS-CLEANUP] done 00:00:04.943 [Pipeline] setCustomBuildProperty 00:00:04.953 [Pipeline] sh 00:00:05.233 + sudo git config --global --replace-all safe.directory '*' 00:00:05.291 [Pipeline] nodesByLabel 00:00:05.292 Found a total of 1 nodes with the 'sorcerer' label 00:00:05.300 [Pipeline] httpRequest 00:00:05.305 HttpMethod: GET 00:00:05.305 URL: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:05.311 Sending request to url: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:05.315 Response Code: HTTP/1.1 200 OK 00:00:05.315 Success: Status code 200 is in the accepted range: 200,404 00:00:05.315 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:06.053 [Pipeline] sh 00:00:06.341 + tar --no-same-owner -xf jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:06.357 [Pipeline] httpRequest 00:00:06.361 HttpMethod: GET 00:00:06.362 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:06.363 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:06.366 Response Code: HTTP/1.1 200 OK 00:00:06.366 Success: Status code 200 is in the accepted range: 200,404 00:00:06.367 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:42.075 [Pipeline] sh 00:00:42.362 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:44.917 [Pipeline] sh 00:00:45.198 + git -C spdk log --oneline -n5 00:00:45.198 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:00:45.198 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:00:45.198 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover 00:00:45.198 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:00:45.198 3b33f4333 test/nvme/cuse: Fix typo 00:00:45.210 [Pipeline] } 00:00:45.227 [Pipeline] // stage 00:00:45.235 [Pipeline] stage 00:00:45.237 [Pipeline] { (Prepare) 00:00:45.254 [Pipeline] writeFile 00:00:45.271 [Pipeline] sh 00:00:45.553 + logger -p user.info -t JENKINS-CI 00:00:45.565 [Pipeline] sh 00:00:45.847 + logger -p user.info -t JENKINS-CI 00:00:45.860 [Pipeline] sh 00:00:46.184 + cat autorun-spdk.conf 00:00:46.184 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.184 SPDK_TEST_ACCEL_DSA=1 00:00:46.184 SPDK_TEST_ACCEL_IAA=1 00:00:46.184 SPDK_TEST_NVMF=1 00:00:46.184 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.184 SPDK_RUN_ASAN=1 00:00:46.184 SPDK_RUN_UBSAN=1 00:00:46.192 RUN_NIGHTLY=1 00:00:46.197 [Pipeline] readFile 00:00:46.220 [Pipeline] withEnv 00:00:46.222 [Pipeline] { 00:00:46.237 [Pipeline] sh 00:00:46.518 + set -ex 00:00:46.518 + [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]] 00:00:46.518 + source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:00:46.518 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.518 ++ SPDK_TEST_ACCEL_DSA=1 00:00:46.518 ++ SPDK_TEST_ACCEL_IAA=1 00:00:46.518 ++ SPDK_TEST_NVMF=1 00:00:46.518 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.518 ++ SPDK_RUN_ASAN=1 00:00:46.518 ++ SPDK_RUN_UBSAN=1 00:00:46.518 ++ RUN_NIGHTLY=1 00:00:46.518 + case $SPDK_TEST_NVMF_NICS in 00:00:46.518 + DRIVERS= 00:00:46.518 + [[ -n '' ]] 00:00:46.518 + exit 0 00:00:46.528 [Pipeline] } 00:00:46.547 [Pipeline] // withEnv 00:00:46.554 [Pipeline] } 00:00:46.569 [Pipeline] // stage 
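The Prepare stage above writes autorun-spdk.conf and then sources it under set -ex, so a missing file or malformed flag aborts the job before any tests start. Below is a minimal sketch of that consume-and-validate pattern, assuming only bash: the file path and flag names are copied from the log, while the script name check_conf.sh and the specific checks are illustrative and not part of the SPDK tooling.

#!/usr/bin/env bash
# check_conf.sh -- illustrative sketch only: load a test configuration file
# and fail fast if it is missing or leaves a required flag unset.
set -euo pipefail

conf=${1:-/var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf}

[[ -f "$conf" ]] || { echo "missing config: $conf" >&2; exit 1; }

# shellcheck source=/dev/null
source "$conf"

# Flags taken from the configuration printed above; treat unset as an error.
: "${SPDK_RUN_FUNCTIONAL_TEST:?SPDK_RUN_FUNCTIONAL_TEST must be set}"
: "${SPDK_TEST_NVMF:?SPDK_TEST_NVMF must be set}"
: "${SPDK_TEST_NVMF_TRANSPORT:?SPDK_TEST_NVMF_TRANSPORT must be set}"

echo "config OK: NVMF transport=${SPDK_TEST_NVMF_TRANSPORT}, nightly=${RUN_NIGHTLY:-0}"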
00:00:46.578 [Pipeline] catchError 00:00:46.579 [Pipeline] { 00:00:46.590 [Pipeline] timeout 00:00:46.590 Timeout set to expire in 50 min 00:00:46.592 [Pipeline] { 00:00:46.603 [Pipeline] stage 00:00:46.605 [Pipeline] { (Tests) 00:00:46.614 [Pipeline] sh 00:00:46.893 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/dsa-phy-autotest 00:00:46.893 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest 00:00:46.893 + DIR_ROOT=/var/jenkins/workspace/dsa-phy-autotest 00:00:46.893 + [[ -n /var/jenkins/workspace/dsa-phy-autotest ]] 00:00:46.893 + DIR_SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:46.893 + DIR_OUTPUT=/var/jenkins/workspace/dsa-phy-autotest/output 00:00:46.893 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/spdk ]] 00:00:46.893 + [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/output ]] 00:00:46.893 + mkdir -p /var/jenkins/workspace/dsa-phy-autotest/output 00:00:46.893 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/output ]] 00:00:46.893 + cd /var/jenkins/workspace/dsa-phy-autotest 00:00:46.893 + source /etc/os-release 00:00:46.893 ++ NAME='Fedora Linux' 00:00:46.893 ++ VERSION='38 (Cloud Edition)' 00:00:46.893 ++ ID=fedora 00:00:46.893 ++ VERSION_ID=38 00:00:46.893 ++ VERSION_CODENAME= 00:00:46.893 ++ PLATFORM_ID=platform:f38 00:00:46.893 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:46.893 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:46.893 ++ LOGO=fedora-logo-icon 00:00:46.893 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:46.893 ++ HOME_URL=https://fedoraproject.org/ 00:00:46.893 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:46.893 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:46.893 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:46.893 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:46.893 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:46.893 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:46.893 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:46.893 ++ SUPPORT_END=2024-05-14 00:00:46.893 ++ VARIANT='Cloud Edition' 00:00:46.893 ++ VARIANT_ID=cloud 00:00:46.893 + uname -a 00:00:46.893 Linux spdk-fcp-07 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:46.893 + sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:00:49.435 Hugepages 00:00:49.435 node hugesize free / total 00:00:49.435 node0 1048576kB 0 / 0 00:00:49.435 node0 2048kB 0 / 0 00:00:49.435 node1 1048576kB 0 / 0 00:00:49.435 node1 2048kB 0 / 0 00:00:49.435 00:00:49.435 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:49.435 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:00:49.435 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:00:49.435 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:00:49.435 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:00:49.435 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:00:49.435 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:00:49.435 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:00:49.435 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:00:49.435 NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:49.435 NVMe 0000:ca:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:00:49.435 DSA 0000:e7:01.0 8086 0b25 1 idxd - - 00:00:49.435 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:00:49.435 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:00:49.435 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:00:49.435 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:00:49.435 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:00:49.435 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:00:49.435 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:00:49.435 + rm -f 
/tmp/spdk-ld-path 00:00:49.435 + source autorun-spdk.conf 00:00:49.435 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.435 ++ SPDK_TEST_ACCEL_DSA=1 00:00:49.435 ++ SPDK_TEST_ACCEL_IAA=1 00:00:49.435 ++ SPDK_TEST_NVMF=1 00:00:49.435 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.435 ++ SPDK_RUN_ASAN=1 00:00:49.435 ++ SPDK_RUN_UBSAN=1 00:00:49.435 ++ RUN_NIGHTLY=1 00:00:49.435 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:49.435 + [[ -n '' ]] 00:00:49.435 + sudo git config --global --add safe.directory /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:49.435 + for M in /var/spdk/build-*-manifest.txt 00:00:49.435 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:49.435 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:00:49.435 + for M in /var/spdk/build-*-manifest.txt 00:00:49.435 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:49.435 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:00:49.435 ++ uname 00:00:49.435 + [[ Linux == \L\i\n\u\x ]] 00:00:49.435 + sudo dmesg -T 00:00:49.435 + sudo dmesg --clear 00:00:49.435 + dmesg_pid=3690416 00:00:49.435 + [[ Fedora Linux == FreeBSD ]] 00:00:49.435 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:49.435 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:49.435 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:49.435 + [[ -x /usr/src/fio-static/fio ]] 00:00:49.435 + export FIO_BIN=/usr/src/fio-static/fio 00:00:49.435 + FIO_BIN=/usr/src/fio-static/fio 00:00:49.435 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\d\s\a\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:49.435 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:49.435 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:49.435 + sudo dmesg -Tw 00:00:49.435 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:49.435 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:49.435 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:49.435 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:49.435 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:49.435 + spdk/autorun.sh /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:00:49.435 Test configuration: 00:00:49.435 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.435 SPDK_TEST_ACCEL_DSA=1 00:00:49.435 SPDK_TEST_ACCEL_IAA=1 00:00:49.435 SPDK_TEST_NVMF=1 00:00:49.435 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.435 SPDK_RUN_ASAN=1 00:00:49.435 SPDK_RUN_UBSAN=1 00:00:49.435 RUN_NIGHTLY=1 03:58:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:00:49.435 03:58:03 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:49.435 03:58:03 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:49.435 03:58:03 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:49.435 03:58:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.435 03:58:03 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.435 03:58:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.435 03:58:03 -- paths/export.sh@5 -- $ export PATH 00:00:49.435 03:58:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.435 03:58:03 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:00:49.435 03:58:03 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:49.435 03:58:03 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715651883.XXXXXX 00:00:49.435 03:58:03 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715651883.OWoHzn 00:00:49.435 03:58:03 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:49.435 03:58:03 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:49.435 03:58:03 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:00:49.435 03:58:03 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:49.435 03:58:03 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:49.435 03:58:03 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:49.435 03:58:03 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:00:49.435 03:58:03 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.435 03:58:04 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:00:49.435 03:58:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:49.435 03:58:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:49.435 03:58:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:49.435 03:58:04 -- spdk/autobuild.sh@16 -- $ date -u 00:00:49.435 Tue May 14 01:58:04 AM UTC 2024 00:00:49.435 03:58:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:49.695 LTS-24-g36faa8c31 00:00:49.695 03:58:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:00:49.695 03:58:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:00:49.695 03:58:04 -- 
common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:00:49.695 03:58:04 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:00:49.695 03:58:04 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.695 ************************************ 00:00:49.695 START TEST asan 00:00:49.695 ************************************ 00:00:49.695 03:58:04 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:00:49.695 using asan 00:00:49.695 00:00:49.695 real 0m0.000s 00:00:49.695 user 0m0.000s 00:00:49.695 sys 0m0.000s 00:00:49.695 03:58:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:49.695 03:58:04 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.696 ************************************ 00:00:49.696 END TEST asan 00:00:49.696 ************************************ 00:00:49.696 03:58:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:49.696 03:58:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:49.696 03:58:04 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:00:49.696 03:58:04 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:00:49.696 03:58:04 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.696 ************************************ 00:00:49.696 START TEST ubsan 00:00:49.696 ************************************ 00:00:49.696 03:58:04 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:00:49.696 using ubsan 00:00:49.696 00:00:49.696 real 0m0.000s 00:00:49.696 user 0m0.000s 00:00:49.696 sys 0m0.000s 00:00:49.696 03:58:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:49.696 03:58:04 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.696 ************************************ 00:00:49.696 END TEST ubsan 00:00:49.696 ************************************ 00:00:49.696 03:58:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:49.696 03:58:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:49.696 03:58:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:49.696 03:58:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:49.696 03:58:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:49.696 03:58:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:49.696 03:58:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:49.696 03:58:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:49.696 03:58:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:00:49.696 Using default SPDK env in /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:00:49.696 Using default DPDK in /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:00:49.956 Using 'verbs' RDMA provider 00:01:00.526 Configuring ISA-L (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:12.758 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:12.758 Creating mk/config.mk...done. 00:01:12.758 Creating mk/cc.flags.mk...done. 00:01:12.758 Type 'make' to build. 
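The asan, ubsan and make blocks in this log are wrapped in START TEST / END TEST banners with a real/user/sys timing line. The actual helper is run_test from SPDK's common/autotest_common.sh; the snippet below is only a rough bash approximation of that pattern, written to show where those banners and timings come from, and is not the upstream implementation.

run_test() {
    # Rough stand-in for SPDK's run_test helper: banner, timed command, banner.
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Invocations mirroring the log above:
run_test asan echo 'using asan'
run_test make make -j128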
00:01:12.758 03:58:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j128 00:01:12.758 03:58:27 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:12.758 03:58:27 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:12.758 03:58:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.758 ************************************ 00:01:12.758 START TEST make 00:01:12.758 ************************************ 00:01:12.758 03:58:27 -- common/autotest_common.sh@1104 -- $ make -j128 00:01:12.758 make[1]: Nothing to be done for 'all'. 00:01:18.032 The Meson build system 00:01:18.032 Version: 1.3.1 00:01:18.032 Source dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk 00:01:18.032 Build dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp 00:01:18.032 Build type: native build 00:01:18.032 Program cat found: YES (/usr/bin/cat) 00:01:18.032 Project name: DPDK 00:01:18.032 Project version: 23.11.0 00:01:18.032 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:18.032 C linker for the host machine: cc ld.bfd 2.39-16 00:01:18.032 Host machine cpu family: x86_64 00:01:18.032 Host machine cpu: x86_64 00:01:18.032 Message: ## Building in Developer Mode ## 00:01:18.032 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:18.032 Program check-symbols.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:18.032 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:18.032 Program python3 found: YES (/usr/bin/python3) 00:01:18.032 Program cat found: YES (/usr/bin/cat) 00:01:18.032 Compiler for C supports arguments -march=native: YES 00:01:18.032 Checking for size of "void *" : 8 00:01:18.032 Checking for size of "void *" : 8 (cached) 00:01:18.032 Library m found: YES 00:01:18.032 Library numa found: YES 00:01:18.032 Has header "numaif.h" : YES 00:01:18.032 Library fdt found: NO 00:01:18.032 Library execinfo found: NO 00:01:18.032 Has header "execinfo.h" : YES 00:01:18.032 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:18.032 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:18.032 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:18.032 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:18.032 Run-time dependency openssl found: YES 3.0.9 00:01:18.032 Run-time dependency libpcap found: YES 1.10.4 00:01:18.032 Has header "pcap.h" with dependency libpcap: YES 00:01:18.032 Compiler for C supports arguments -Wcast-qual: YES 00:01:18.032 Compiler for C supports arguments -Wdeprecated: YES 00:01:18.032 Compiler for C supports arguments -Wformat: YES 00:01:18.032 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:18.032 Compiler for C supports arguments -Wformat-security: NO 00:01:18.032 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:18.032 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:18.032 Compiler for C supports arguments -Wnested-externs: YES 00:01:18.032 Compiler for C supports arguments -Wold-style-definition: YES 00:01:18.032 Compiler for C supports arguments -Wpointer-arith: YES 00:01:18.032 Compiler for C supports arguments -Wsign-compare: YES 00:01:18.032 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:18.032 Compiler for C supports arguments -Wundef: YES 00:01:18.032 Compiler for C supports arguments -Wwrite-strings: YES 00:01:18.032 Compiler for C supports arguments 
-Wno-address-of-packed-member: YES 00:01:18.032 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:18.032 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:18.032 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:18.032 Program objdump found: YES (/usr/bin/objdump) 00:01:18.032 Compiler for C supports arguments -mavx512f: YES 00:01:18.032 Checking if "AVX512 checking" compiles: YES 00:01:18.032 Fetching value of define "__SSE4_2__" : 1 00:01:18.032 Fetching value of define "__AES__" : 1 00:01:18.032 Fetching value of define "__AVX__" : 1 00:01:18.032 Fetching value of define "__AVX2__" : 1 00:01:18.032 Fetching value of define "__AVX512BW__" : 1 00:01:18.032 Fetching value of define "__AVX512CD__" : 1 00:01:18.032 Fetching value of define "__AVX512DQ__" : 1 00:01:18.032 Fetching value of define "__AVX512F__" : 1 00:01:18.032 Fetching value of define "__AVX512VL__" : 1 00:01:18.032 Fetching value of define "__PCLMUL__" : 1 00:01:18.032 Fetching value of define "__RDRND__" : 1 00:01:18.032 Fetching value of define "__RDSEED__" : 1 00:01:18.032 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:18.032 Fetching value of define "__znver1__" : (undefined) 00:01:18.032 Fetching value of define "__znver2__" : (undefined) 00:01:18.032 Fetching value of define "__znver3__" : (undefined) 00:01:18.032 Fetching value of define "__znver4__" : (undefined) 00:01:18.032 Library asan found: YES 00:01:18.032 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:18.032 Message: lib/log: Defining dependency "log" 00:01:18.032 Message: lib/kvargs: Defining dependency "kvargs" 00:01:18.032 Message: lib/telemetry: Defining dependency "telemetry" 00:01:18.032 Library rt found: YES 00:01:18.032 Checking for function "getentropy" : NO 00:01:18.032 Message: lib/eal: Defining dependency "eal" 00:01:18.032 Message: lib/ring: Defining dependency "ring" 00:01:18.032 Message: lib/rcu: Defining dependency "rcu" 00:01:18.032 Message: lib/mempool: Defining dependency "mempool" 00:01:18.032 Message: lib/mbuf: Defining dependency "mbuf" 00:01:18.032 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:18.032 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:18.032 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:18.032 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:18.032 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:18.032 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:18.032 Compiler for C supports arguments -mpclmul: YES 00:01:18.032 Compiler for C supports arguments -maes: YES 00:01:18.032 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:18.032 Compiler for C supports arguments -mavx512bw: YES 00:01:18.032 Compiler for C supports arguments -mavx512dq: YES 00:01:18.032 Compiler for C supports arguments -mavx512vl: YES 00:01:18.032 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:18.032 Compiler for C supports arguments -mavx2: YES 00:01:18.032 Compiler for C supports arguments -mavx: YES 00:01:18.032 Message: lib/net: Defining dependency "net" 00:01:18.032 Message: lib/meter: Defining dependency "meter" 00:01:18.032 Message: lib/ethdev: Defining dependency "ethdev" 00:01:18.032 Message: lib/pci: Defining dependency "pci" 00:01:18.033 Message: lib/cmdline: Defining dependency "cmdline" 00:01:18.033 Message: lib/hash: Defining dependency "hash" 00:01:18.033 Message: lib/timer: Defining dependency "timer" 00:01:18.033 Message: lib/compressdev: Defining dependency 
"compressdev" 00:01:18.033 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:18.033 Message: lib/dmadev: Defining dependency "dmadev" 00:01:18.033 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:18.033 Message: lib/power: Defining dependency "power" 00:01:18.033 Message: lib/reorder: Defining dependency "reorder" 00:01:18.033 Message: lib/security: Defining dependency "security" 00:01:18.033 Has header "linux/userfaultfd.h" : YES 00:01:18.033 Has header "linux/vduse.h" : YES 00:01:18.033 Message: lib/vhost: Defining dependency "vhost" 00:01:18.033 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:18.033 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:18.033 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:18.033 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:18.033 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:18.033 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:18.033 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:18.033 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:18.033 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:18.033 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:18.033 Program doxygen found: YES (/usr/bin/doxygen) 00:01:18.033 Configuring doxy-api-html.conf using configuration 00:01:18.033 Configuring doxy-api-man.conf using configuration 00:01:18.033 Program mandb found: YES (/usr/bin/mandb) 00:01:18.033 Program sphinx-build found: NO 00:01:18.033 Configuring rte_build_config.h using configuration 00:01:18.033 Message: 00:01:18.033 ================= 00:01:18.033 Applications Enabled 00:01:18.033 ================= 00:01:18.033 00:01:18.033 apps: 00:01:18.033 00:01:18.033 00:01:18.033 Message: 00:01:18.033 ================= 00:01:18.033 Libraries Enabled 00:01:18.033 ================= 00:01:18.033 00:01:18.033 libs: 00:01:18.033 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:18.033 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:18.033 cryptodev, dmadev, power, reorder, security, vhost, 00:01:18.033 00:01:18.033 Message: 00:01:18.033 =============== 00:01:18.033 Drivers Enabled 00:01:18.033 =============== 00:01:18.033 00:01:18.033 common: 00:01:18.033 00:01:18.033 bus: 00:01:18.033 pci, vdev, 00:01:18.033 mempool: 00:01:18.033 ring, 00:01:18.033 dma: 00:01:18.033 00:01:18.033 net: 00:01:18.033 00:01:18.033 crypto: 00:01:18.033 00:01:18.033 compress: 00:01:18.033 00:01:18.033 vdpa: 00:01:18.033 00:01:18.033 00:01:18.033 Message: 00:01:18.033 ================= 00:01:18.033 Content Skipped 00:01:18.033 ================= 00:01:18.033 00:01:18.033 apps: 00:01:18.033 dumpcap: explicitly disabled via build config 00:01:18.033 graph: explicitly disabled via build config 00:01:18.033 pdump: explicitly disabled via build config 00:01:18.033 proc-info: explicitly disabled via build config 00:01:18.033 test-acl: explicitly disabled via build config 00:01:18.033 test-bbdev: explicitly disabled via build config 00:01:18.033 test-cmdline: explicitly disabled via build config 00:01:18.033 test-compress-perf: explicitly disabled via build config 00:01:18.033 test-crypto-perf: explicitly disabled via build config 00:01:18.033 test-dma-perf: explicitly disabled via build config 00:01:18.033 test-eventdev: explicitly disabled via build config 00:01:18.033 test-fib: 
explicitly disabled via build config 00:01:18.033 test-flow-perf: explicitly disabled via build config 00:01:18.033 test-gpudev: explicitly disabled via build config 00:01:18.033 test-mldev: explicitly disabled via build config 00:01:18.033 test-pipeline: explicitly disabled via build config 00:01:18.033 test-pmd: explicitly disabled via build config 00:01:18.033 test-regex: explicitly disabled via build config 00:01:18.033 test-sad: explicitly disabled via build config 00:01:18.033 test-security-perf: explicitly disabled via build config 00:01:18.033 00:01:18.033 libs: 00:01:18.033 metrics: explicitly disabled via build config 00:01:18.033 acl: explicitly disabled via build config 00:01:18.033 bbdev: explicitly disabled via build config 00:01:18.033 bitratestats: explicitly disabled via build config 00:01:18.033 bpf: explicitly disabled via build config 00:01:18.033 cfgfile: explicitly disabled via build config 00:01:18.033 distributor: explicitly disabled via build config 00:01:18.033 efd: explicitly disabled via build config 00:01:18.033 eventdev: explicitly disabled via build config 00:01:18.033 dispatcher: explicitly disabled via build config 00:01:18.033 gpudev: explicitly disabled via build config 00:01:18.033 gro: explicitly disabled via build config 00:01:18.033 gso: explicitly disabled via build config 00:01:18.033 ip_frag: explicitly disabled via build config 00:01:18.033 jobstats: explicitly disabled via build config 00:01:18.033 latencystats: explicitly disabled via build config 00:01:18.033 lpm: explicitly disabled via build config 00:01:18.033 member: explicitly disabled via build config 00:01:18.033 pcapng: explicitly disabled via build config 00:01:18.033 rawdev: explicitly disabled via build config 00:01:18.033 regexdev: explicitly disabled via build config 00:01:18.033 mldev: explicitly disabled via build config 00:01:18.033 rib: explicitly disabled via build config 00:01:18.033 sched: explicitly disabled via build config 00:01:18.033 stack: explicitly disabled via build config 00:01:18.033 ipsec: explicitly disabled via build config 00:01:18.033 pdcp: explicitly disabled via build config 00:01:18.033 fib: explicitly disabled via build config 00:01:18.033 port: explicitly disabled via build config 00:01:18.033 pdump: explicitly disabled via build config 00:01:18.033 table: explicitly disabled via build config 00:01:18.033 pipeline: explicitly disabled via build config 00:01:18.033 graph: explicitly disabled via build config 00:01:18.033 node: explicitly disabled via build config 00:01:18.033 00:01:18.033 drivers: 00:01:18.033 common/cpt: not in enabled drivers build config 00:01:18.033 common/dpaax: not in enabled drivers build config 00:01:18.033 common/iavf: not in enabled drivers build config 00:01:18.033 common/idpf: not in enabled drivers build config 00:01:18.033 common/mvep: not in enabled drivers build config 00:01:18.033 common/octeontx: not in enabled drivers build config 00:01:18.033 bus/auxiliary: not in enabled drivers build config 00:01:18.033 bus/cdx: not in enabled drivers build config 00:01:18.033 bus/dpaa: not in enabled drivers build config 00:01:18.033 bus/fslmc: not in enabled drivers build config 00:01:18.033 bus/ifpga: not in enabled drivers build config 00:01:18.033 bus/platform: not in enabled drivers build config 00:01:18.033 bus/vmbus: not in enabled drivers build config 00:01:18.033 common/cnxk: not in enabled drivers build config 00:01:18.033 common/mlx5: not in enabled drivers build config 00:01:18.033 common/nfp: not in enabled drivers 
build config 00:01:18.033 common/qat: not in enabled drivers build config 00:01:18.033 common/sfc_efx: not in enabled drivers build config 00:01:18.033 mempool/bucket: not in enabled drivers build config 00:01:18.033 mempool/cnxk: not in enabled drivers build config 00:01:18.033 mempool/dpaa: not in enabled drivers build config 00:01:18.033 mempool/dpaa2: not in enabled drivers build config 00:01:18.033 mempool/octeontx: not in enabled drivers build config 00:01:18.033 mempool/stack: not in enabled drivers build config 00:01:18.033 dma/cnxk: not in enabled drivers build config 00:01:18.033 dma/dpaa: not in enabled drivers build config 00:01:18.033 dma/dpaa2: not in enabled drivers build config 00:01:18.033 dma/hisilicon: not in enabled drivers build config 00:01:18.033 dma/idxd: not in enabled drivers build config 00:01:18.033 dma/ioat: not in enabled drivers build config 00:01:18.033 dma/skeleton: not in enabled drivers build config 00:01:18.033 net/af_packet: not in enabled drivers build config 00:01:18.033 net/af_xdp: not in enabled drivers build config 00:01:18.033 net/ark: not in enabled drivers build config 00:01:18.033 net/atlantic: not in enabled drivers build config 00:01:18.033 net/avp: not in enabled drivers build config 00:01:18.033 net/axgbe: not in enabled drivers build config 00:01:18.033 net/bnx2x: not in enabled drivers build config 00:01:18.033 net/bnxt: not in enabled drivers build config 00:01:18.033 net/bonding: not in enabled drivers build config 00:01:18.033 net/cnxk: not in enabled drivers build config 00:01:18.033 net/cpfl: not in enabled drivers build config 00:01:18.033 net/cxgbe: not in enabled drivers build config 00:01:18.033 net/dpaa: not in enabled drivers build config 00:01:18.033 net/dpaa2: not in enabled drivers build config 00:01:18.033 net/e1000: not in enabled drivers build config 00:01:18.033 net/ena: not in enabled drivers build config 00:01:18.033 net/enetc: not in enabled drivers build config 00:01:18.033 net/enetfec: not in enabled drivers build config 00:01:18.033 net/enic: not in enabled drivers build config 00:01:18.033 net/failsafe: not in enabled drivers build config 00:01:18.033 net/fm10k: not in enabled drivers build config 00:01:18.033 net/gve: not in enabled drivers build config 00:01:18.033 net/hinic: not in enabled drivers build config 00:01:18.033 net/hns3: not in enabled drivers build config 00:01:18.033 net/i40e: not in enabled drivers build config 00:01:18.033 net/iavf: not in enabled drivers build config 00:01:18.033 net/ice: not in enabled drivers build config 00:01:18.033 net/idpf: not in enabled drivers build config 00:01:18.033 net/igc: not in enabled drivers build config 00:01:18.033 net/ionic: not in enabled drivers build config 00:01:18.033 net/ipn3ke: not in enabled drivers build config 00:01:18.033 net/ixgbe: not in enabled drivers build config 00:01:18.033 net/mana: not in enabled drivers build config 00:01:18.033 net/memif: not in enabled drivers build config 00:01:18.033 net/mlx4: not in enabled drivers build config 00:01:18.033 net/mlx5: not in enabled drivers build config 00:01:18.033 net/mvneta: not in enabled drivers build config 00:01:18.033 net/mvpp2: not in enabled drivers build config 00:01:18.033 net/netvsc: not in enabled drivers build config 00:01:18.033 net/nfb: not in enabled drivers build config 00:01:18.033 net/nfp: not in enabled drivers build config 00:01:18.033 net/ngbe: not in enabled drivers build config 00:01:18.033 net/null: not in enabled drivers build config 00:01:18.034 net/octeontx: not in 
enabled drivers build config 00:01:18.034 net/octeon_ep: not in enabled drivers build config 00:01:18.034 net/pcap: not in enabled drivers build config 00:01:18.034 net/pfe: not in enabled drivers build config 00:01:18.034 net/qede: not in enabled drivers build config 00:01:18.034 net/ring: not in enabled drivers build config 00:01:18.034 net/sfc: not in enabled drivers build config 00:01:18.034 net/softnic: not in enabled drivers build config 00:01:18.034 net/tap: not in enabled drivers build config 00:01:18.034 net/thunderx: not in enabled drivers build config 00:01:18.034 net/txgbe: not in enabled drivers build config 00:01:18.034 net/vdev_netvsc: not in enabled drivers build config 00:01:18.034 net/vhost: not in enabled drivers build config 00:01:18.034 net/virtio: not in enabled drivers build config 00:01:18.034 net/vmxnet3: not in enabled drivers build config 00:01:18.034 raw/*: missing internal dependency, "rawdev" 00:01:18.034 crypto/armv8: not in enabled drivers build config 00:01:18.034 crypto/bcmfs: not in enabled drivers build config 00:01:18.034 crypto/caam_jr: not in enabled drivers build config 00:01:18.034 crypto/ccp: not in enabled drivers build config 00:01:18.034 crypto/cnxk: not in enabled drivers build config 00:01:18.034 crypto/dpaa_sec: not in enabled drivers build config 00:01:18.034 crypto/dpaa2_sec: not in enabled drivers build config 00:01:18.034 crypto/ipsec_mb: not in enabled drivers build config 00:01:18.034 crypto/mlx5: not in enabled drivers build config 00:01:18.034 crypto/mvsam: not in enabled drivers build config 00:01:18.034 crypto/nitrox: not in enabled drivers build config 00:01:18.034 crypto/null: not in enabled drivers build config 00:01:18.034 crypto/octeontx: not in enabled drivers build config 00:01:18.034 crypto/openssl: not in enabled drivers build config 00:01:18.034 crypto/scheduler: not in enabled drivers build config 00:01:18.034 crypto/uadk: not in enabled drivers build config 00:01:18.034 crypto/virtio: not in enabled drivers build config 00:01:18.034 compress/isal: not in enabled drivers build config 00:01:18.034 compress/mlx5: not in enabled drivers build config 00:01:18.034 compress/octeontx: not in enabled drivers build config 00:01:18.034 compress/zlib: not in enabled drivers build config 00:01:18.034 regex/*: missing internal dependency, "regexdev" 00:01:18.034 ml/*: missing internal dependency, "mldev" 00:01:18.034 vdpa/ifc: not in enabled drivers build config 00:01:18.034 vdpa/mlx5: not in enabled drivers build config 00:01:18.034 vdpa/nfp: not in enabled drivers build config 00:01:18.034 vdpa/sfc: not in enabled drivers build config 00:01:18.034 event/*: missing internal dependency, "eventdev" 00:01:18.034 baseband/*: missing internal dependency, "bbdev" 00:01:18.034 gpu/*: missing internal dependency, "gpudev" 00:01:18.034 00:01:18.034 00:01:18.034 Build targets in project: 84 00:01:18.034 00:01:18.034 DPDK 23.11.0 00:01:18.034 00:01:18.034 User defined options 00:01:18.034 buildtype : debug 00:01:18.034 default_library : shared 00:01:18.034 libdir : lib 00:01:18.034 prefix : /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:01:18.034 b_sanitize : address 00:01:18.034 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:18.034 c_link_args : 00:01:18.034 cpu_instruction_set: native 00:01:18.034 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:18.034 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:18.034 enable_docs : false 00:01:18.034 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:18.034 enable_kmods : false 00:01:18.034 tests : false 00:01:18.034 00:01:18.034 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:18.609 ninja: Entering directory `/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp' 00:01:18.609 [1/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:18.609 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:18.609 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:18.609 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:18.609 [5/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:18.609 [6/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:18.609 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:18.609 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:18.609 [9/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:18.868 [10/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:18.868 [11/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:18.868 [12/264] Linking static target lib/librte_kvargs.a 00:01:18.868 [13/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:18.868 [14/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:18.868 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:18.868 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:18.868 [17/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:18.868 [18/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:18.868 [19/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:18.868 [20/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:18.868 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:18.869 [22/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:18.869 [23/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:18.869 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:18.869 [25/264] Linking static target lib/librte_pci.a 00:01:18.869 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:18.869 [27/264] Linking static target lib/librte_log.a 00:01:18.869 [28/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:18.869 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:18.869 [30/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:18.869 [31/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:18.869 [32/264] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:18.869 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:19.129 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:19.129 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:19.129 [36/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:19.129 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:19.129 [38/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:19.129 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:19.129 [40/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:19.129 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:19.129 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:19.129 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:19.129 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:19.129 [45/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:19.129 [46/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:19.129 [47/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:19.129 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:19.129 [49/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:19.129 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:19.129 [51/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:19.129 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:19.129 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:19.129 [54/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:19.129 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:19.129 [56/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:19.129 [57/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:19.129 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:19.129 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:19.129 [60/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:19.129 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:19.129 [62/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:19.129 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:19.129 [64/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:19.129 [65/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:19.129 [66/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:19.129 [67/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:19.129 [68/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:19.129 [69/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.129 [70/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:19.129 [71/264] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:19.129 [72/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:19.129 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:19.129 [74/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:19.129 [75/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:19.129 [76/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:19.129 [77/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.129 [78/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:19.388 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:19.388 [80/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:19.388 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:19.388 [82/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:19.388 [83/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:19.388 [84/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:19.388 [85/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:19.388 [86/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:19.388 [87/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:19.388 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:19.388 [89/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:19.388 [90/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:19.388 [91/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:19.388 [92/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:19.388 [93/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:19.388 [94/264] Linking static target lib/librte_dmadev.a 00:01:19.388 [95/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:19.388 [96/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:19.388 [97/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:19.388 [98/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:19.388 [99/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:19.388 [100/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:19.388 [101/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:19.388 [102/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:19.388 [103/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:19.388 [104/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:19.388 [105/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:19.388 [106/264] Linking static target lib/librte_telemetry.a 00:01:19.388 [107/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:19.388 [108/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:19.388 [109/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:19.388 [110/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:19.388 [111/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:19.388 [112/264] Linking static target lib/librte_rcu.a 00:01:19.388 [113/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:19.388 [114/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:19.388 [115/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:19.388 [116/264] Linking static target lib/librte_meter.a 00:01:19.388 [117/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:19.388 [118/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:19.388 [119/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:19.388 [120/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:19.388 [121/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:19.388 [122/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:19.388 [123/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:19.388 [124/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:19.388 [125/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:19.388 [126/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:19.388 [127/264] Linking static target lib/librte_ring.a 00:01:19.388 [128/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:19.388 [129/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:19.388 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:19.388 [131/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:19.388 [132/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:19.388 [133/264] Linking static target lib/librte_cmdline.a 00:01:19.388 [134/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:19.388 [135/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.388 [136/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:19.388 [137/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:19.388 [138/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:19.388 [139/264] Linking static target lib/librte_compressdev.a 00:01:19.388 [140/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:19.388 [141/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:19.645 [142/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:19.645 [143/264] Linking static target lib/librte_timer.a 00:01:19.645 [144/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:19.645 [145/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:19.645 [146/264] Linking target lib/librte_log.so.24.0 00:01:19.645 [147/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:19.645 [148/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:19.645 [149/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:19.645 [150/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:19.645 [151/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:19.645 [152/264] Generating lib/meter.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:19.645 [153/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:19.645 [154/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:19.645 [155/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:19.645 [156/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:19.645 [157/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.645 [158/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.645 [159/264] Linking static target lib/librte_mempool.a 00:01:19.645 [160/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:19.645 [161/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:19.645 [162/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:19.645 [163/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:19.645 [164/264] Linking static target drivers/librte_bus_vdev.a 00:01:19.645 [165/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:19.645 [166/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:19.645 [167/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:19.645 [168/264] Linking static target lib/librte_power.a 00:01:19.645 [169/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:19.645 [170/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:19.645 [171/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:19.645 [172/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.645 [173/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:19.645 [174/264] Linking static target lib/librte_eal.a 00:01:19.645 [175/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:19.645 [176/264] Linking static target lib/librte_security.a 00:01:19.645 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:19.645 [178/264] Linking target lib/librte_kvargs.so.24.0 00:01:19.645 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:19.646 [180/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.646 [181/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:19.646 [182/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:19.646 [183/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:19.646 [184/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:19.646 [185/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:19.646 [186/264] Linking target lib/librte_telemetry.so.24.0 00:01:19.646 [187/264] Linking static target lib/librte_net.a 00:01:19.646 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:19.646 [189/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:19.646 [190/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:19.646 [191/264] Linking static target drivers/librte_bus_pci.a 00:01:19.646 [192/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 
00:01:19.646 [193/264] Linking static target lib/librte_reorder.a 00:01:19.646 [194/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:19.646 [195/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:19.903 [196/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.903 [197/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.903 [198/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:19.903 [199/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:19.903 [200/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:19.903 [201/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:19.903 [202/264] Linking static target drivers/librte_mempool_ring.a 00:01:19.903 [203/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.903 [204/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.903 [205/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:19.903 [206/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.903 [207/264] Linking static target lib/librte_mbuf.a 00:01:19.903 [208/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:19.903 [209/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.903 [210/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:19.903 [211/264] Linking static target lib/librte_hash.a 00:01:20.161 [212/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.161 [213/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.161 [214/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.161 [215/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.161 [216/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:20.161 [217/264] Linking static target lib/librte_cryptodev.a 00:01:20.420 [218/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:20.420 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.420 [220/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.723 [221/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:20.723 [222/264] Linking static target lib/librte_ethdev.a 00:01:21.290 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:21.548 [224/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.446 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:23.446 [226/264] Linking static target lib/librte_vhost.a 00:01:24.819 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.194 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.194 [229/264] Generating lib/eal.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:26.194 [230/264] Linking target lib/librte_eal.so.24.0 00:01:26.194 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:26.453 [232/264] Linking target lib/librte_ring.so.24.0 00:01:26.453 [233/264] Linking target lib/librte_pci.so.24.0 00:01:26.453 [234/264] Linking target lib/librte_timer.so.24.0 00:01:26.453 [235/264] Linking target lib/librte_dmadev.so.24.0 00:01:26.453 [236/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:26.453 [237/264] Linking target lib/librte_meter.so.24.0 00:01:26.453 [238/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:26.453 [239/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:26.453 [240/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:26.453 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:26.453 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:26.453 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:26.453 [244/264] Linking target lib/librte_rcu.so.24.0 00:01:26.453 [245/264] Linking target lib/librte_mempool.so.24.0 00:01:26.711 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:26.711 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:26.711 [248/264] Linking target lib/librte_mbuf.so.24.0 00:01:26.711 [249/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:26.711 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:26.711 [251/264] Linking target lib/librte_compressdev.so.24.0 00:01:26.711 [252/264] Linking target lib/librte_net.so.24.0 00:01:26.711 [253/264] Linking target lib/librte_cryptodev.so.24.0 00:01:26.711 [254/264] Linking target lib/librte_reorder.so.24.0 00:01:26.711 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:26.970 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:26.970 [257/264] Linking target lib/librte_security.so.24.0 00:01:26.970 [258/264] Linking target lib/librte_hash.so.24.0 00:01:26.970 [259/264] Linking target lib/librte_ethdev.so.24.0 00:01:26.970 [260/264] Linking target lib/librte_cmdline.so.24.0 00:01:26.970 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:26.970 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:26.970 [263/264] Linking target lib/librte_power.so.24.0 00:01:26.970 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:26.970 INFO: autodetecting backend as ninja 00:01:26.970 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128 00:01:27.538 CC lib/log/log_deprecated.o 00:01:27.538 CC lib/log/log_flags.o 00:01:27.538 CC lib/log/log.o 00:01:27.538 CC lib/ut_mock/mock.o 00:01:27.538 CC lib/ut/ut.o 00:01:27.796 LIB libspdk_ut_mock.a 00:01:27.796 SO libspdk_ut_mock.so.5.0 00:01:27.796 LIB libspdk_log.a 00:01:27.796 SYMLINK libspdk_ut_mock.so 00:01:27.796 SO libspdk_log.so.6.1 00:01:27.796 LIB libspdk_ut.a 00:01:27.796 SO libspdk_ut.so.1.0 00:01:27.796 SYMLINK libspdk_log.so 00:01:27.796 SYMLINK libspdk_ut.so 00:01:28.054 CC lib/util/base64.o 00:01:28.054 CC lib/util/bit_array.o 00:01:28.054 CC 
lib/util/crc32.o 00:01:28.054 CC lib/util/cpuset.o 00:01:28.054 CC lib/util/crc16.o 00:01:28.054 CC lib/util/crc32c.o 00:01:28.054 CC lib/util/dif.o 00:01:28.054 CC lib/util/fd.o 00:01:28.054 CC lib/util/file.o 00:01:28.054 CC lib/util/crc32_ieee.o 00:01:28.054 CC lib/util/iov.o 00:01:28.054 CC lib/util/crc64.o 00:01:28.054 CC lib/util/pipe.o 00:01:28.054 CC lib/util/hexlify.o 00:01:28.054 CC lib/util/math.o 00:01:28.054 CC lib/dma/dma.o 00:01:28.054 CC lib/util/strerror_tls.o 00:01:28.054 CC lib/util/string.o 00:01:28.054 CC lib/util/uuid.o 00:01:28.054 CC lib/util/fd_group.o 00:01:28.054 CC lib/util/xor.o 00:01:28.054 CC lib/util/zipf.o 00:01:28.054 CC lib/ioat/ioat.o 00:01:28.054 CXX lib/trace_parser/trace.o 00:01:28.054 CC lib/vfio_user/host/vfio_user_pci.o 00:01:28.054 CC lib/vfio_user/host/vfio_user.o 00:01:28.054 LIB libspdk_dma.a 00:01:28.054 SO libspdk_dma.so.3.0 00:01:28.314 SYMLINK libspdk_dma.so 00:01:28.314 LIB libspdk_vfio_user.a 00:01:28.314 SO libspdk_vfio_user.so.4.0 00:01:28.314 LIB libspdk_ioat.a 00:01:28.314 SYMLINK libspdk_vfio_user.so 00:01:28.314 SO libspdk_ioat.so.6.0 00:01:28.314 SYMLINK libspdk_ioat.so 00:01:28.314 LIB libspdk_util.a 00:01:28.573 SO libspdk_util.so.8.0 00:01:28.573 SYMLINK libspdk_util.so 00:01:28.573 CC lib/idxd/idxd.o 00:01:28.573 CC lib/env_dpdk/pci.o 00:01:28.573 CC lib/idxd/idxd_user.o 00:01:28.573 CC lib/env_dpdk/env.o 00:01:28.573 CC lib/env_dpdk/memory.o 00:01:28.573 CC lib/env_dpdk/init.o 00:01:28.573 CC lib/env_dpdk/pci_ioat.o 00:01:28.573 CC lib/env_dpdk/threads.o 00:01:28.573 CC lib/env_dpdk/pci_idxd.o 00:01:28.573 CC lib/env_dpdk/pci_virtio.o 00:01:28.573 CC lib/env_dpdk/pci_event.o 00:01:28.573 CC lib/conf/conf.o 00:01:28.573 CC lib/env_dpdk/pci_vmd.o 00:01:28.573 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:28.573 CC lib/env_dpdk/pci_dpdk.o 00:01:28.573 CC lib/env_dpdk/sigbus_handler.o 00:01:28.573 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:28.573 CC lib/json/json_parse.o 00:01:28.573 CC lib/json/json_write.o 00:01:28.573 CC lib/json/json_util.o 00:01:28.573 CC lib/vmd/vmd.o 00:01:28.573 CC lib/vmd/led.o 00:01:28.831 CC lib/rdma/common.o 00:01:28.831 CC lib/rdma/rdma_verbs.o 00:01:28.831 LIB libspdk_conf.a 00:01:28.831 LIB libspdk_trace_parser.a 00:01:28.831 SO libspdk_conf.so.5.0 00:01:28.831 LIB libspdk_rdma.a 00:01:28.831 SO libspdk_trace_parser.so.4.0 00:01:28.831 SO libspdk_rdma.so.5.0 00:01:28.831 SYMLINK libspdk_conf.so 00:01:29.090 SYMLINK libspdk_rdma.so 00:01:29.090 SYMLINK libspdk_trace_parser.so 00:01:29.090 LIB libspdk_json.a 00:01:29.090 SO libspdk_json.so.5.1 00:01:29.090 SYMLINK libspdk_json.so 00:01:29.090 LIB libspdk_idxd.a 00:01:29.090 SO libspdk_idxd.so.11.0 00:01:29.090 SYMLINK libspdk_idxd.so 00:01:29.347 CC lib/jsonrpc/jsonrpc_server.o 00:01:29.347 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:29.347 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:29.347 CC lib/jsonrpc/jsonrpc_client.o 00:01:29.347 LIB libspdk_jsonrpc.a 00:01:29.347 LIB libspdk_vmd.a 00:01:29.347 SO libspdk_jsonrpc.so.5.1 00:01:29.347 SO libspdk_vmd.so.5.0 00:01:29.606 SYMLINK libspdk_jsonrpc.so 00:01:29.606 SYMLINK libspdk_vmd.so 00:01:29.606 CC lib/rpc/rpc.o 00:01:29.606 LIB libspdk_env_dpdk.a 00:01:29.606 SO libspdk_env_dpdk.so.13.0 00:01:29.864 LIB libspdk_rpc.a 00:01:29.864 SO libspdk_rpc.so.5.0 00:01:29.864 SYMLINK libspdk_rpc.so 00:01:29.864 SYMLINK libspdk_env_dpdk.so 00:01:29.864 CC lib/trace/trace_flags.o 00:01:29.864 CC lib/trace/trace.o 00:01:29.864 CC lib/trace/trace_rpc.o 00:01:29.864 CC lib/notify/notify.o 00:01:29.864 CC lib/notify/notify_rpc.o 
00:01:29.864 CC lib/sock/sock.o 00:01:29.864 CC lib/sock/sock_rpc.o 00:01:30.122 LIB libspdk_notify.a 00:01:30.122 LIB libspdk_trace.a 00:01:30.122 SO libspdk_notify.so.5.0 00:01:30.122 SO libspdk_trace.so.9.0 00:01:30.122 SYMLINK libspdk_notify.so 00:01:30.122 SYMLINK libspdk_trace.so 00:01:30.122 LIB libspdk_sock.a 00:01:30.381 SO libspdk_sock.so.8.0 00:01:30.381 SYMLINK libspdk_sock.so 00:01:30.381 CC lib/thread/iobuf.o 00:01:30.381 CC lib/thread/thread.o 00:01:30.381 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:30.381 CC lib/nvme/nvme_fabric.o 00:01:30.381 CC lib/nvme/nvme_ns.o 00:01:30.381 CC lib/nvme/nvme_ctrlr.o 00:01:30.381 CC lib/nvme/nvme_ns_cmd.o 00:01:30.381 CC lib/nvme/nvme_pcie.o 00:01:30.381 CC lib/nvme/nvme_pcie_common.o 00:01:30.381 CC lib/nvme/nvme.o 00:01:30.381 CC lib/nvme/nvme_qpair.o 00:01:30.381 CC lib/nvme/nvme_transport.o 00:01:30.381 CC lib/nvme/nvme_quirks.o 00:01:30.381 CC lib/nvme/nvme_discovery.o 00:01:30.381 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:30.381 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:30.381 CC lib/nvme/nvme_tcp.o 00:01:30.381 CC lib/nvme/nvme_opal.o 00:01:30.381 CC lib/nvme/nvme_poll_group.o 00:01:30.381 CC lib/nvme/nvme_io_msg.o 00:01:30.381 CC lib/nvme/nvme_cuse.o 00:01:30.381 CC lib/nvme/nvme_zns.o 00:01:30.381 CC lib/nvme/nvme_rdma.o 00:01:30.381 CC lib/nvme/nvme_vfio_user.o 00:01:31.758 LIB libspdk_thread.a 00:01:31.758 SO libspdk_thread.so.9.0 00:01:31.758 SYMLINK libspdk_thread.so 00:01:31.758 CC lib/virtio/virtio.o 00:01:31.758 CC lib/virtio/virtio_vfio_user.o 00:01:31.758 CC lib/virtio/virtio_vhost_user.o 00:01:31.758 CC lib/virtio/virtio_pci.o 00:01:31.758 CC lib/blob/blobstore.o 00:01:31.758 CC lib/blob/blob_bs_dev.o 00:01:31.758 CC lib/blob/request.o 00:01:31.758 CC lib/blob/zeroes.o 00:01:31.758 CC lib/accel/accel.o 00:01:31.758 CC lib/accel/accel_rpc.o 00:01:31.758 CC lib/accel/accel_sw.o 00:01:31.758 CC lib/init/subsystem.o 00:01:31.758 CC lib/init/json_config.o 00:01:31.758 CC lib/init/subsystem_rpc.o 00:01:31.758 CC lib/init/rpc.o 00:01:31.758 LIB libspdk_nvme.a 00:01:32.017 LIB libspdk_init.a 00:01:32.017 SO libspdk_nvme.so.12.0 00:01:32.017 SO libspdk_init.so.4.0 00:01:32.017 LIB libspdk_virtio.a 00:01:32.017 SYMLINK libspdk_init.so 00:01:32.017 SO libspdk_virtio.so.6.0 00:01:32.017 SYMLINK libspdk_virtio.so 00:01:32.276 SYMLINK libspdk_nvme.so 00:01:32.276 CC lib/event/app.o 00:01:32.276 CC lib/event/log_rpc.o 00:01:32.276 CC lib/event/reactor.o 00:01:32.276 CC lib/event/scheduler_static.o 00:01:32.276 CC lib/event/app_rpc.o 00:01:32.276 LIB libspdk_accel.a 00:01:32.276 SO libspdk_accel.so.14.0 00:01:32.535 SYMLINK libspdk_accel.so 00:01:32.535 CC lib/bdev/bdev.o 00:01:32.535 CC lib/bdev/bdev_rpc.o 00:01:32.535 CC lib/bdev/bdev_zone.o 00:01:32.535 CC lib/bdev/part.o 00:01:32.535 CC lib/bdev/scsi_nvme.o 00:01:32.535 LIB libspdk_event.a 00:01:32.796 SO libspdk_event.so.12.0 00:01:32.796 SYMLINK libspdk_event.so 00:01:34.700 LIB libspdk_blob.a 00:01:34.700 SO libspdk_blob.so.10.1 00:01:34.700 SYMLINK libspdk_blob.so 00:01:34.700 CC lib/lvol/lvol.o 00:01:34.700 CC lib/blobfs/blobfs.o 00:01:34.700 CC lib/blobfs/tree.o 00:01:34.700 LIB libspdk_bdev.a 00:01:34.700 SO libspdk_bdev.so.14.0 00:01:34.700 SYMLINK libspdk_bdev.so 00:01:34.959 CC lib/nvmf/ctrlr.o 00:01:34.959 CC lib/nvmf/ctrlr_bdev.o 00:01:34.959 CC lib/nvmf/nvmf.o 00:01:34.959 CC lib/nvmf/ctrlr_discovery.o 00:01:34.959 CC lib/nvmf/subsystem.o 00:01:34.959 CC lib/nvmf/nvmf_rpc.o 00:01:34.959 CC lib/nbd/nbd.o 00:01:34.959 CC lib/nbd/nbd_rpc.o 00:01:34.959 CC lib/nvmf/transport.o 
00:01:34.959 CC lib/nvmf/tcp.o 00:01:34.959 CC lib/nvmf/rdma.o 00:01:34.959 CC lib/ftl/ftl_init.o 00:01:34.959 CC lib/ftl/ftl_core.o 00:01:34.959 CC lib/ftl/ftl_debug.o 00:01:34.959 CC lib/ftl/ftl_sb.o 00:01:34.959 CC lib/ftl/ftl_l2p.o 00:01:34.959 CC lib/ftl/ftl_io.o 00:01:34.959 CC lib/ftl/ftl_l2p_flat.o 00:01:34.959 CC lib/ftl/ftl_layout.o 00:01:34.959 CC lib/ftl/ftl_writer.o 00:01:34.959 CC lib/ftl/ftl_nv_cache.o 00:01:34.959 CC lib/ftl/ftl_band.o 00:01:34.959 CC lib/ftl/ftl_reloc.o 00:01:34.959 CC lib/ftl/ftl_rq.o 00:01:34.959 CC lib/ftl/ftl_band_ops.o 00:01:34.959 CC lib/ftl/ftl_p2l.o 00:01:34.959 CC lib/scsi/lun.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt.o 00:01:34.959 CC lib/scsi/dev.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:34.959 CC lib/ftl/ftl_l2p_cache.o 00:01:34.959 CC lib/scsi/scsi_bdev.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:34.959 CC lib/scsi/port.o 00:01:34.959 CC lib/scsi/scsi_rpc.o 00:01:34.959 CC lib/scsi/scsi.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:34.959 CC lib/scsi/scsi_pr.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:34.959 CC lib/scsi/task.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:34.959 CC lib/ublk/ublk_rpc.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:34.959 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:34.959 CC lib/ftl/utils/ftl_conf.o 00:01:34.959 CC lib/ftl/utils/ftl_md.o 00:01:34.959 CC lib/ublk/ublk.o 00:01:34.959 CC lib/ftl/utils/ftl_mempool.o 00:01:34.959 CC lib/ftl/utils/ftl_bitmap.o 00:01:34.959 CC lib/ftl/utils/ftl_property.o 00:01:34.959 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:34.959 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:34.959 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:34.959 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:34.959 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:34.959 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:34.959 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:34.959 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:34.959 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:34.959 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:34.959 CC lib/ftl/base/ftl_base_bdev.o 00:01:34.959 CC lib/ftl/ftl_trace.o 00:01:34.959 CC lib/ftl/base/ftl_base_dev.o 00:01:35.218 LIB libspdk_lvol.a 00:01:35.476 SO libspdk_lvol.so.9.1 00:01:35.476 SYMLINK libspdk_lvol.so 00:01:35.476 LIB libspdk_blobfs.a 00:01:35.476 SO libspdk_blobfs.so.9.0 00:01:35.735 LIB libspdk_nbd.a 00:01:35.735 LIB libspdk_scsi.a 00:01:35.735 SYMLINK libspdk_blobfs.so 00:01:35.735 SO libspdk_nbd.so.6.0 00:01:35.735 SO libspdk_scsi.so.8.0 00:01:35.735 SYMLINK libspdk_nbd.so 00:01:35.735 LIB libspdk_ublk.a 00:01:35.735 SO libspdk_ublk.so.2.0 00:01:35.735 SYMLINK libspdk_scsi.so 00:01:35.735 SYMLINK libspdk_ublk.so 00:01:35.993 CC lib/iscsi/conn.o 00:01:35.993 CC lib/iscsi/init_grp.o 00:01:35.993 CC lib/iscsi/param.o 00:01:35.993 CC lib/iscsi/iscsi.o 00:01:35.993 CC lib/iscsi/md5.o 00:01:35.993 CC lib/iscsi/portal_grp.o 00:01:35.993 LIB libspdk_ftl.a 00:01:35.993 CC lib/iscsi/iscsi_subsystem.o 00:01:35.993 CC lib/iscsi/tgt_node.o 00:01:35.993 CC lib/iscsi/task.o 00:01:35.993 CC lib/iscsi/iscsi_rpc.o 00:01:35.993 CC lib/vhost/vhost.o 00:01:35.993 CC lib/vhost/vhost_blk.o 00:01:35.993 CC lib/vhost/vhost_scsi.o 00:01:35.993 CC lib/vhost/vhost_rpc.o 00:01:35.993 CC lib/vhost/rte_vhost_user.o 00:01:35.993 SO libspdk_ftl.so.8.0 00:01:36.252 
SYMLINK libspdk_ftl.so 00:01:36.820 LIB libspdk_nvmf.a 00:01:36.820 SO libspdk_nvmf.so.17.0 00:01:36.820 SYMLINK libspdk_nvmf.so 00:01:37.078 LIB libspdk_iscsi.a 00:01:37.078 LIB libspdk_vhost.a 00:01:37.078 SO libspdk_vhost.so.7.1 00:01:37.078 SO libspdk_iscsi.so.7.0 00:01:37.078 SYMLINK libspdk_vhost.so 00:01:37.078 SYMLINK libspdk_iscsi.so 00:01:37.337 CC module/env_dpdk/env_dpdk_rpc.o 00:01:37.337 CC module/scheduler/gscheduler/gscheduler.o 00:01:37.337 CC module/blob/bdev/blob_bdev.o 00:01:37.337 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:37.337 CC module/accel/error/accel_error.o 00:01:37.337 CC module/accel/error/accel_error_rpc.o 00:01:37.337 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:37.337 CC module/sock/posix/posix.o 00:01:37.337 CC module/accel/dsa/accel_dsa.o 00:01:37.337 CC module/accel/ioat/accel_ioat.o 00:01:37.337 CC module/accel/ioat/accel_ioat_rpc.o 00:01:37.337 CC module/accel/dsa/accel_dsa_rpc.o 00:01:37.337 CC module/accel/iaa/accel_iaa.o 00:01:37.337 CC module/accel/iaa/accel_iaa_rpc.o 00:01:37.595 LIB libspdk_env_dpdk_rpc.a 00:01:37.595 LIB libspdk_scheduler_dpdk_governor.a 00:01:37.595 SO libspdk_env_dpdk_rpc.so.5.0 00:01:37.595 LIB libspdk_scheduler_dynamic.a 00:01:37.595 SO libspdk_scheduler_dpdk_governor.so.3.0 00:01:37.595 LIB libspdk_scheduler_gscheduler.a 00:01:37.595 SYMLINK libspdk_env_dpdk_rpc.so 00:01:37.595 LIB libspdk_accel_ioat.a 00:01:37.595 SO libspdk_scheduler_dynamic.so.3.0 00:01:37.595 SO libspdk_accel_ioat.so.5.0 00:01:37.595 SO libspdk_scheduler_gscheduler.so.3.0 00:01:37.595 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:37.595 LIB libspdk_accel_error.a 00:01:37.595 SYMLINK libspdk_scheduler_gscheduler.so 00:01:37.595 SYMLINK libspdk_scheduler_dynamic.so 00:01:37.595 LIB libspdk_accel_iaa.a 00:01:37.595 SO libspdk_accel_error.so.1.0 00:01:37.595 SYMLINK libspdk_accel_ioat.so 00:01:37.595 SO libspdk_accel_iaa.so.2.0 00:01:37.595 LIB libspdk_accel_dsa.a 00:01:37.595 SO libspdk_accel_dsa.so.4.0 00:01:37.595 LIB libspdk_blob_bdev.a 00:01:37.595 SYMLINK libspdk_accel_error.so 00:01:37.595 SYMLINK libspdk_accel_iaa.so 00:01:37.595 SO libspdk_blob_bdev.so.10.1 00:01:37.853 SYMLINK libspdk_accel_dsa.so 00:01:37.853 SYMLINK libspdk_blob_bdev.so 00:01:37.853 CC module/blobfs/bdev/blobfs_bdev.o 00:01:37.853 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:37.853 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:37.853 CC module/bdev/null/bdev_null_rpc.o 00:01:37.853 CC module/bdev/null/bdev_null.o 00:01:37.853 CC module/bdev/gpt/vbdev_gpt.o 00:01:37.853 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:37.853 CC module/bdev/malloc/bdev_malloc.o 00:01:37.853 CC module/bdev/gpt/gpt.o 00:01:37.853 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:37.853 CC module/bdev/passthru/vbdev_passthru.o 00:01:37.853 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:37.853 CC module/bdev/lvol/vbdev_lvol.o 00:01:37.853 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:37.853 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:37.853 CC module/bdev/delay/vbdev_delay.o 00:01:37.853 CC module/bdev/split/vbdev_split.o 00:01:37.853 CC module/bdev/ftl/bdev_ftl.o 00:01:37.853 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:37.853 CC module/bdev/nvme/nvme_rpc.o 00:01:37.853 CC module/bdev/nvme/bdev_nvme.o 00:01:37.853 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:37.853 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:37.853 CC module/bdev/split/vbdev_split_rpc.o 00:01:37.853 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:37.853 CC module/bdev/nvme/bdev_mdns_client.o 00:01:37.853 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:01:37.853 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:37.853 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:37.853 CC module/bdev/iscsi/bdev_iscsi.o 00:01:37.853 CC module/bdev/nvme/vbdev_opal.o 00:01:37.853 CC module/bdev/raid/bdev_raid_rpc.o 00:01:37.853 CC module/bdev/raid/bdev_raid.o 00:01:37.853 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:37.853 CC module/bdev/raid/bdev_raid_sb.o 00:01:38.111 CC module/bdev/aio/bdev_aio.o 00:01:38.111 CC module/bdev/raid/raid0.o 00:01:38.111 CC module/bdev/aio/bdev_aio_rpc.o 00:01:38.111 CC module/bdev/error/vbdev_error_rpc.o 00:01:38.111 CC module/bdev/raid/raid1.o 00:01:38.111 CC module/bdev/error/vbdev_error.o 00:01:38.111 CC module/bdev/raid/concat.o 00:01:38.111 LIB libspdk_blobfs_bdev.a 00:01:38.111 LIB libspdk_sock_posix.a 00:01:38.111 SO libspdk_blobfs_bdev.so.5.0 00:01:38.111 SO libspdk_sock_posix.so.5.0 00:01:38.369 SYMLINK libspdk_blobfs_bdev.so 00:01:38.369 LIB libspdk_bdev_split.a 00:01:38.369 LIB libspdk_bdev_null.a 00:01:38.369 SYMLINK libspdk_sock_posix.so 00:01:38.369 LIB libspdk_bdev_zone_block.a 00:01:38.369 SO libspdk_bdev_split.so.5.0 00:01:38.369 SO libspdk_bdev_null.so.5.0 00:01:38.369 SO libspdk_bdev_zone_block.so.5.0 00:01:38.369 LIB libspdk_bdev_ftl.a 00:01:38.369 SO libspdk_bdev_ftl.so.5.0 00:01:38.369 LIB libspdk_bdev_aio.a 00:01:38.369 SYMLINK libspdk_bdev_null.so 00:01:38.369 SYMLINK libspdk_bdev_zone_block.so 00:01:38.369 SYMLINK libspdk_bdev_split.so 00:01:38.369 LIB libspdk_bdev_gpt.a 00:01:38.369 LIB libspdk_bdev_error.a 00:01:38.369 LIB libspdk_bdev_passthru.a 00:01:38.369 SO libspdk_bdev_aio.so.5.0 00:01:38.369 LIB libspdk_bdev_iscsi.a 00:01:38.369 SO libspdk_bdev_gpt.so.5.0 00:01:38.369 SO libspdk_bdev_error.so.5.0 00:01:38.369 SO libspdk_bdev_passthru.so.5.0 00:01:38.369 SO libspdk_bdev_iscsi.so.5.0 00:01:38.369 SYMLINK libspdk_bdev_ftl.so 00:01:38.369 LIB libspdk_bdev_malloc.a 00:01:38.369 LIB libspdk_bdev_delay.a 00:01:38.369 SO libspdk_bdev_malloc.so.5.0 00:01:38.369 SYMLINK libspdk_bdev_aio.so 00:01:38.369 SYMLINK libspdk_bdev_gpt.so 00:01:38.369 SYMLINK libspdk_bdev_iscsi.so 00:01:38.369 SYMLINK libspdk_bdev_error.so 00:01:38.369 LIB libspdk_bdev_virtio.a 00:01:38.369 SYMLINK libspdk_bdev_passthru.so 00:01:38.369 SO libspdk_bdev_delay.so.5.0 00:01:38.369 SYMLINK libspdk_bdev_malloc.so 00:01:38.369 SO libspdk_bdev_virtio.so.5.0 00:01:38.628 SYMLINK libspdk_bdev_delay.so 00:01:38.628 SYMLINK libspdk_bdev_virtio.so 00:01:38.628 LIB libspdk_bdev_lvol.a 00:01:38.628 SO libspdk_bdev_lvol.so.5.0 00:01:38.628 SYMLINK libspdk_bdev_lvol.so 00:01:38.887 LIB libspdk_bdev_raid.a 00:01:38.887 SO libspdk_bdev_raid.so.5.0 00:01:38.887 SYMLINK libspdk_bdev_raid.so 00:01:39.846 LIB libspdk_bdev_nvme.a 00:01:39.846 SO libspdk_bdev_nvme.so.6.0 00:01:39.846 SYMLINK libspdk_bdev_nvme.so 00:01:40.103 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:40.103 CC module/event/subsystems/iobuf/iobuf.o 00:01:40.103 CC module/event/subsystems/vmd/vmd.o 00:01:40.103 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:40.103 CC module/event/subsystems/scheduler/scheduler.o 00:01:40.103 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:40.103 CC module/event/subsystems/sock/sock.o 00:01:40.361 LIB libspdk_event_vmd.a 00:01:40.361 LIB libspdk_event_scheduler.a 00:01:40.361 LIB libspdk_event_iobuf.a 00:01:40.361 LIB libspdk_event_sock.a 00:01:40.361 LIB libspdk_event_vhost_blk.a 00:01:40.361 SO libspdk_event_vmd.so.5.0 00:01:40.361 SO libspdk_event_scheduler.so.3.0 00:01:40.361 SO 
libspdk_event_iobuf.so.2.0 00:01:40.361 SO libspdk_event_vhost_blk.so.2.0 00:01:40.361 SO libspdk_event_sock.so.4.0 00:01:40.361 SYMLINK libspdk_event_vmd.so 00:01:40.361 SYMLINK libspdk_event_iobuf.so 00:01:40.361 SYMLINK libspdk_event_scheduler.so 00:01:40.361 SYMLINK libspdk_event_sock.so 00:01:40.361 SYMLINK libspdk_event_vhost_blk.so 00:01:40.619 CC module/event/subsystems/accel/accel.o 00:01:40.619 LIB libspdk_event_accel.a 00:01:40.619 SO libspdk_event_accel.so.5.0 00:01:40.619 SYMLINK libspdk_event_accel.so 00:01:40.877 CC module/event/subsystems/bdev/bdev.o 00:01:40.877 LIB libspdk_event_bdev.a 00:01:41.251 SO libspdk_event_bdev.so.5.0 00:01:41.251 SYMLINK libspdk_event_bdev.so 00:01:41.251 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:41.251 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:41.251 CC module/event/subsystems/scsi/scsi.o 00:01:41.251 CC module/event/subsystems/ublk/ublk.o 00:01:41.251 CC module/event/subsystems/nbd/nbd.o 00:01:41.251 LIB libspdk_event_scsi.a 00:01:41.251 SO libspdk_event_scsi.so.5.0 00:01:41.251 LIB libspdk_event_ublk.a 00:01:41.251 LIB libspdk_event_nbd.a 00:01:41.509 SO libspdk_event_nbd.so.5.0 00:01:41.509 SO libspdk_event_ublk.so.2.0 00:01:41.509 SYMLINK libspdk_event_scsi.so 00:01:41.509 LIB libspdk_event_nvmf.a 00:01:41.509 SYMLINK libspdk_event_ublk.so 00:01:41.509 SYMLINK libspdk_event_nbd.so 00:01:41.509 SO libspdk_event_nvmf.so.5.0 00:01:41.509 SYMLINK libspdk_event_nvmf.so 00:01:41.509 CC module/event/subsystems/iscsi/iscsi.o 00:01:41.509 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:41.766 LIB libspdk_event_iscsi.a 00:01:41.766 LIB libspdk_event_vhost_scsi.a 00:01:41.766 SO libspdk_event_iscsi.so.5.0 00:01:41.766 SO libspdk_event_vhost_scsi.so.2.0 00:01:41.766 SYMLINK libspdk_event_iscsi.so 00:01:41.766 SYMLINK libspdk_event_vhost_scsi.so 00:01:41.766 SO libspdk.so.5.0 00:01:41.766 SYMLINK libspdk.so 00:01:42.024 CC app/spdk_nvme_perf/perf.o 00:01:42.024 CC app/trace_record/trace_record.o 00:01:42.024 CXX app/trace/trace.o 00:01:42.024 CC app/spdk_lspci/spdk_lspci.o 00:01:42.024 CC app/spdk_nvme_identify/identify.o 00:01:42.024 TEST_HEADER include/spdk/accel_module.h 00:01:42.024 TEST_HEADER include/spdk/accel.h 00:01:42.024 TEST_HEADER include/spdk/assert.h 00:01:42.024 TEST_HEADER include/spdk/barrier.h 00:01:42.024 CC app/spdk_top/spdk_top.o 00:01:42.024 CC app/spdk_nvme_discover/discovery_aer.o 00:01:42.024 TEST_HEADER include/spdk/base64.h 00:01:42.024 CC test/rpc_client/rpc_client_test.o 00:01:42.024 CC app/iscsi_tgt/iscsi_tgt.o 00:01:42.024 TEST_HEADER include/spdk/bdev_zone.h 00:01:42.024 TEST_HEADER include/spdk/bdev_module.h 00:01:42.024 TEST_HEADER include/spdk/bdev.h 00:01:42.024 TEST_HEADER include/spdk/bit_pool.h 00:01:42.024 TEST_HEADER include/spdk/bit_array.h 00:01:42.024 TEST_HEADER include/spdk/blob_bdev.h 00:01:42.024 CC app/nvmf_tgt/nvmf_main.o 00:01:42.024 TEST_HEADER include/spdk/blobfs.h 00:01:42.024 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:42.024 TEST_HEADER include/spdk/blob.h 00:01:42.024 TEST_HEADER include/spdk/conf.h 00:01:42.024 TEST_HEADER include/spdk/config.h 00:01:42.024 TEST_HEADER include/spdk/cpuset.h 00:01:42.024 TEST_HEADER include/spdk/crc32.h 00:01:42.024 TEST_HEADER include/spdk/crc16.h 00:01:42.024 TEST_HEADER include/spdk/dif.h 00:01:42.024 TEST_HEADER include/spdk/endian.h 00:01:42.024 TEST_HEADER include/spdk/dma.h 00:01:42.024 TEST_HEADER include/spdk/crc64.h 00:01:42.024 TEST_HEADER include/spdk/env.h 00:01:42.024 TEST_HEADER include/spdk/env_dpdk.h 00:01:42.024 
TEST_HEADER include/spdk/event.h 00:01:42.024 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:42.024 CC app/spdk_dd/spdk_dd.o 00:01:42.024 TEST_HEADER include/spdk/fd_group.h 00:01:42.024 TEST_HEADER include/spdk/fd.h 00:01:42.024 TEST_HEADER include/spdk/file.h 00:01:42.024 TEST_HEADER include/spdk/ftl.h 00:01:42.024 TEST_HEADER include/spdk/gpt_spec.h 00:01:42.024 TEST_HEADER include/spdk/histogram_data.h 00:01:42.024 TEST_HEADER include/spdk/idxd.h 00:01:42.024 TEST_HEADER include/spdk/hexlify.h 00:01:42.024 TEST_HEADER include/spdk/init.h 00:01:42.024 TEST_HEADER include/spdk/ioat.h 00:01:42.024 TEST_HEADER include/spdk/idxd_spec.h 00:01:42.024 CC app/vhost/vhost.o 00:01:42.024 TEST_HEADER include/spdk/iscsi_spec.h 00:01:42.024 TEST_HEADER include/spdk/ioat_spec.h 00:01:42.024 TEST_HEADER include/spdk/json.h 00:01:42.024 TEST_HEADER include/spdk/jsonrpc.h 00:01:42.024 TEST_HEADER include/spdk/likely.h 00:01:42.024 TEST_HEADER include/spdk/log.h 00:01:42.024 TEST_HEADER include/spdk/memory.h 00:01:42.024 TEST_HEADER include/spdk/lvol.h 00:01:42.024 TEST_HEADER include/spdk/nbd.h 00:01:42.024 CC app/spdk_tgt/spdk_tgt.o 00:01:42.024 TEST_HEADER include/spdk/notify.h 00:01:42.024 TEST_HEADER include/spdk/nvme.h 00:01:42.024 TEST_HEADER include/spdk/nvme_intel.h 00:01:42.024 TEST_HEADER include/spdk/mmio.h 00:01:42.024 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:42.024 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:42.024 TEST_HEADER include/spdk/nvme_spec.h 00:01:42.024 TEST_HEADER include/spdk/nvme_zns.h 00:01:42.024 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:42.024 TEST_HEADER include/spdk/nvmf.h 00:01:42.024 TEST_HEADER include/spdk/nvmf_spec.h 00:01:42.024 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:42.024 TEST_HEADER include/spdk/opal_spec.h 00:01:42.024 TEST_HEADER include/spdk/opal.h 00:01:42.024 TEST_HEADER include/spdk/nvmf_transport.h 00:01:42.024 TEST_HEADER include/spdk/pci_ids.h 00:01:42.024 TEST_HEADER include/spdk/pipe.h 00:01:42.024 TEST_HEADER include/spdk/reduce.h 00:01:42.024 TEST_HEADER include/spdk/queue.h 00:01:42.024 TEST_HEADER include/spdk/scheduler.h 00:01:42.024 TEST_HEADER include/spdk/scsi.h 00:01:42.024 TEST_HEADER include/spdk/rpc.h 00:01:42.024 TEST_HEADER include/spdk/scsi_spec.h 00:01:42.024 TEST_HEADER include/spdk/sock.h 00:01:42.024 TEST_HEADER include/spdk/trace.h 00:01:42.024 TEST_HEADER include/spdk/string.h 00:01:42.024 TEST_HEADER include/spdk/trace_parser.h 00:01:42.024 TEST_HEADER include/spdk/stdinc.h 00:01:42.024 TEST_HEADER include/spdk/ublk.h 00:01:42.024 TEST_HEADER include/spdk/thread.h 00:01:42.024 TEST_HEADER include/spdk/uuid.h 00:01:42.024 TEST_HEADER include/spdk/version.h 00:01:42.024 CC examples/idxd/perf/perf.o 00:01:42.024 CC app/fio/nvme/fio_plugin.o 00:01:42.024 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:42.024 TEST_HEADER include/spdk/tree.h 00:01:42.024 TEST_HEADER include/spdk/vhost.h 00:01:42.024 TEST_HEADER include/spdk/vmd.h 00:01:42.024 TEST_HEADER include/spdk/util.h 00:01:42.024 TEST_HEADER include/spdk/xor.h 00:01:42.024 TEST_HEADER include/spdk/zipf.h 00:01:42.024 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:42.024 CXX test/cpp_headers/accel_module.o 00:01:42.024 CXX test/cpp_headers/accel.o 00:01:42.024 CXX test/cpp_headers/assert.o 00:01:42.024 CC examples/ioat/verify/verify.o 00:01:42.024 CXX test/cpp_headers/barrier.o 00:01:42.024 CXX test/cpp_headers/base64.o 00:01:42.024 CXX test/cpp_headers/bdev.o 00:01:42.289 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:42.289 CXX test/cpp_headers/bit_array.o 
00:01:42.289 CXX test/cpp_headers/bit_pool.o 00:01:42.289 CXX test/cpp_headers/bdev_module.o 00:01:42.289 CXX test/cpp_headers/bdev_zone.o 00:01:42.289 CXX test/cpp_headers/blobfs_bdev.o 00:01:42.289 CXX test/cpp_headers/blobfs.o 00:01:42.289 CXX test/cpp_headers/blob_bdev.o 00:01:42.289 CXX test/cpp_headers/conf.o 00:01:42.289 CXX test/cpp_headers/config.o 00:01:42.289 CXX test/cpp_headers/crc16.o 00:01:42.289 CXX test/cpp_headers/blob.o 00:01:42.289 CXX test/cpp_headers/cpuset.o 00:01:42.289 CXX test/cpp_headers/crc32.o 00:01:42.289 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:42.289 CXX test/cpp_headers/dma.o 00:01:42.289 CC examples/ioat/perf/perf.o 00:01:42.289 CC examples/bdev/hello_world/hello_bdev.o 00:01:42.289 CXX test/cpp_headers/dif.o 00:01:42.289 CXX test/cpp_headers/crc64.o 00:01:42.289 CC examples/nvme/hotplug/hotplug.o 00:01:42.289 CC examples/nvme/hello_world/hello_world.o 00:01:42.289 CXX test/cpp_headers/event.o 00:01:42.289 CXX test/cpp_headers/env_dpdk.o 00:01:42.289 CXX test/cpp_headers/env.o 00:01:42.289 CXX test/cpp_headers/endian.o 00:01:42.289 CC examples/accel/perf/accel_perf.o 00:01:42.289 CXX test/cpp_headers/fd.o 00:01:42.289 CXX test/cpp_headers/fd_group.o 00:01:42.289 CC examples/sock/hello_world/hello_sock.o 00:01:42.289 CXX test/cpp_headers/ftl.o 00:01:42.289 CC test/env/memory/memory_ut.o 00:01:42.289 CXX test/cpp_headers/file.o 00:01:42.289 CXX test/cpp_headers/hexlify.o 00:01:42.289 CXX test/cpp_headers/idxd.o 00:01:42.289 CC test/nvme/reset/reset.o 00:01:42.289 CXX test/cpp_headers/idxd_spec.o 00:01:42.289 CXX test/cpp_headers/gpt_spec.o 00:01:42.289 CXX test/cpp_headers/init.o 00:01:42.289 CC test/nvme/fused_ordering/fused_ordering.o 00:01:42.289 CXX test/cpp_headers/ioat_spec.o 00:01:42.289 CXX test/cpp_headers/histogram_data.o 00:01:42.289 CXX test/cpp_headers/ioat.o 00:01:42.289 CC examples/nvme/reconnect/reconnect.o 00:01:42.289 CXX test/cpp_headers/likely.o 00:01:42.289 CC examples/blob/hello_world/hello_blob.o 00:01:42.289 CXX test/cpp_headers/log.o 00:01:42.289 CXX test/cpp_headers/iscsi_spec.o 00:01:42.289 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:42.289 CC test/nvme/boot_partition/boot_partition.o 00:01:42.289 CXX test/cpp_headers/json.o 00:01:42.289 CC examples/nvme/arbitration/arbitration.o 00:01:42.289 CXX test/cpp_headers/mmio.o 00:01:42.289 CXX test/cpp_headers/jsonrpc.o 00:01:42.289 CC examples/bdev/bdevperf/bdevperf.o 00:01:42.289 CC test/nvme/reserve/reserve.o 00:01:42.289 CXX test/cpp_headers/nbd.o 00:01:42.289 CXX test/cpp_headers/lvol.o 00:01:42.289 CXX test/cpp_headers/memory.o 00:01:42.289 CXX test/cpp_headers/notify.o 00:01:42.289 CC test/nvme/fdp/fdp.o 00:01:42.289 CXX test/cpp_headers/nvme.o 00:01:42.289 CC test/dma/test_dma/test_dma.o 00:01:42.289 CC test/accel/dif/dif.o 00:01:42.289 CC examples/nvmf/nvmf/nvmf.o 00:01:42.289 CXX test/cpp_headers/nvme_intel.o 00:01:42.289 CC examples/blob/cli/blobcli.o 00:01:42.289 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:42.289 CXX test/cpp_headers/nvme_ocssd.o 00:01:42.289 CXX test/cpp_headers/nvme_spec.o 00:01:42.289 CC test/nvme/aer/aer.o 00:01:42.289 CC test/nvme/connect_stress/connect_stress.o 00:01:42.289 CC app/fio/bdev/fio_plugin.o 00:01:42.289 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:42.289 CC test/env/vtophys/vtophys.o 00:01:42.289 CC test/nvme/overhead/overhead.o 00:01:42.289 CC test/bdev/bdevio/bdevio.o 00:01:42.289 CC test/nvme/startup/startup.o 00:01:42.289 CC test/nvme/sgl/sgl.o 00:01:42.289 CC examples/nvme/abort/abort.o 00:01:42.289 CC 
test/app/jsoncat/jsoncat.o 00:01:42.289 CC test/app/stub/stub.o 00:01:42.289 CC test/nvme/err_injection/err_injection.o 00:01:42.289 CC examples/vmd/lsvmd/lsvmd.o 00:01:42.289 CC test/app/histogram_perf/histogram_perf.o 00:01:42.289 CC test/event/reactor_perf/reactor_perf.o 00:01:42.289 CC test/nvme/compliance/nvme_compliance.o 00:01:42.289 CC test/nvme/e2edp/nvme_dp.o 00:01:42.289 CC test/event/event_perf/event_perf.o 00:01:42.289 CC test/app/bdev_svc/bdev_svc.o 00:01:42.289 CC test/thread/poller_perf/poller_perf.o 00:01:42.289 CC test/nvme/simple_copy/simple_copy.o 00:01:42.289 CXX test/cpp_headers/nvme_zns.o 00:01:42.289 LINK spdk_lspci 00:01:42.289 CC test/env/pci/pci_ut.o 00:01:42.289 CC examples/vmd/led/led.o 00:01:42.289 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:42.289 CC test/nvme/cuse/cuse.o 00:01:42.289 CC test/event/app_repeat/app_repeat.o 00:01:42.289 CC test/blobfs/mkfs/mkfs.o 00:01:42.289 CC examples/thread/thread/thread_ex.o 00:01:42.289 CC examples/util/zipf/zipf.o 00:01:42.289 CC test/event/reactor/reactor.o 00:01:42.289 CXX test/cpp_headers/nvmf_cmd.o 00:01:42.289 CC test/event/scheduler/scheduler.o 00:01:42.554 LINK nvmf_tgt 00:01:42.554 LINK rpc_client_test 00:01:42.554 LINK iscsi_tgt 00:01:42.554 LINK spdk_nvme_discover 00:01:42.554 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:42.554 CC test/env/mem_callbacks/mem_callbacks.o 00:01:42.554 LINK interrupt_tgt 00:01:42.815 LINK spdk_tgt 00:01:42.815 LINK pmr_persistence 00:01:42.815 LINK cmb_copy 00:01:42.815 CC test/lvol/esnap/esnap.o 00:01:42.815 LINK reactor 00:01:42.815 LINK lsvmd 00:01:42.815 LINK bdev_svc 00:01:42.815 LINK reactor_perf 00:01:42.815 LINK vhost 00:01:42.815 LINK led 00:01:42.815 LINK fused_ordering 00:01:42.815 LINK hello_world 00:01:43.076 LINK ioat_perf 00:01:43.076 LINK stub 00:01:43.076 LINK event_perf 00:01:43.076 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:43.076 LINK hello_bdev 00:01:43.076 CXX test/cpp_headers/nvmf.o 00:01:43.076 LINK histogram_perf 00:01:43.076 LINK err_injection 00:01:43.076 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:43.076 CXX test/cpp_headers/nvmf_spec.o 00:01:43.076 LINK scheduler 00:01:43.076 CXX test/cpp_headers/nvmf_transport.o 00:01:43.076 CXX test/cpp_headers/opal.o 00:01:43.076 LINK simple_copy 00:01:43.076 CXX test/cpp_headers/opal_spec.o 00:01:43.076 LINK reset 00:01:43.076 CXX test/cpp_headers/pci_ids.o 00:01:43.076 CXX test/cpp_headers/pipe.o 00:01:43.076 LINK spdk_dd 00:01:43.076 CXX test/cpp_headers/queue.o 00:01:43.076 CXX test/cpp_headers/reduce.o 00:01:43.076 CXX test/cpp_headers/scheduler.o 00:01:43.076 LINK hello_blob 00:01:43.076 CXX test/cpp_headers/rpc.o 00:01:43.076 LINK reserve 00:01:43.076 CXX test/cpp_headers/scsi.o 00:01:43.076 CXX test/cpp_headers/scsi_spec.o 00:01:43.076 CXX test/cpp_headers/sock.o 00:01:43.076 CXX test/cpp_headers/stdinc.o 00:01:43.076 CXX test/cpp_headers/string.o 00:01:43.076 CXX test/cpp_headers/thread.o 00:01:43.076 CXX test/cpp_headers/trace.o 00:01:43.076 LINK spdk_trace_record 00:01:43.076 CXX test/cpp_headers/trace_parser.o 00:01:43.076 LINK zipf 00:01:43.076 CXX test/cpp_headers/tree.o 00:01:43.076 CXX test/cpp_headers/ublk.o 00:01:43.076 LINK sgl 00:01:43.076 CXX test/cpp_headers/uuid.o 00:01:43.076 CXX test/cpp_headers/util.o 00:01:43.076 CXX test/cpp_headers/version.o 00:01:43.076 LINK vtophys 00:01:43.076 CXX test/cpp_headers/vfio_user_pci.o 00:01:43.076 CXX test/cpp_headers/vfio_user_spec.o 00:01:43.076 CXX test/cpp_headers/vhost.o 00:01:43.076 CXX test/cpp_headers/vmd.o 00:01:43.076 CXX 
test/cpp_headers/xor.o 00:01:43.076 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:43.076 CXX test/cpp_headers/zipf.o 00:01:43.076 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:43.076 LINK jsoncat 00:01:43.076 LINK idxd_perf 00:01:43.335 LINK boot_partition 00:01:43.335 LINK startup 00:01:43.335 LINK mkfs 00:01:43.335 LINK connect_stress 00:01:43.335 LINK test_dma 00:01:43.335 LINK nvme_compliance 00:01:43.335 LINK app_repeat 00:01:43.335 LINK nvme_dp 00:01:43.335 LINK poller_perf 00:01:43.335 LINK fdp 00:01:43.335 LINK doorbell_aers 00:01:43.335 LINK abort 00:01:43.335 LINK dif 00:01:43.335 LINK env_dpdk_post_init 00:01:43.335 LINK thread 00:01:43.593 LINK spdk_nvme 00:01:43.593 LINK verify 00:01:43.593 LINK pci_ut 00:01:43.593 LINK reconnect 00:01:43.593 LINK nvmf 00:01:43.593 LINK hotplug 00:01:43.593 LINK hello_sock 00:01:43.593 LINK overhead 00:01:43.593 LINK aer 00:01:43.593 LINK accel_perf 00:01:43.593 LINK nvme_fuzz 00:01:43.593 LINK arbitration 00:01:43.593 LINK blobcli 00:01:43.593 LINK spdk_top 00:01:43.850 LINK spdk_trace 00:01:43.850 LINK bdevio 00:01:43.850 LINK spdk_bdev 00:01:43.850 LINK vhost_fuzz 00:01:43.850 LINK nvme_manage 00:01:43.850 LINK mem_callbacks 00:01:44.108 LINK memory_ut 00:01:44.108 LINK spdk_nvme_perf 00:01:44.108 LINK spdk_nvme_identify 00:01:44.108 LINK bdevperf 00:01:44.108 LINK cuse 00:01:44.674 LINK iscsi_fuzz 00:01:46.573 LINK esnap 00:01:46.831 00:01:46.831 real 0m34.284s 00:01:46.831 user 5m34.465s 00:01:46.831 sys 4m56.386s 00:01:46.831 03:59:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:46.831 03:59:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.831 ************************************ 00:01:46.831 END TEST make 00:01:46.831 ************************************ 00:01:46.831 03:59:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:01:46.831 03:59:01 -- nvmf/common.sh@7 -- # uname -s 00:01:46.831 03:59:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:46.831 03:59:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:46.831 03:59:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:46.831 03:59:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:46.831 03:59:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:46.831 03:59:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:46.831 03:59:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:46.831 03:59:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:46.831 03:59:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:46.831 03:59:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:46.831 03:59:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:01:46.831 03:59:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:01:46.831 03:59:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:46.831 03:59:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:46.831 03:59:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:01:46.831 03:59:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:01:46.831 03:59:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:46.831 03:59:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:46.831 03:59:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:46.831 03:59:01 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.831 03:59:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.831 03:59:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.831 03:59:01 -- paths/export.sh@5 -- # export PATH 00:01:46.831 03:59:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.831 03:59:01 -- nvmf/common.sh@46 -- # : 0 00:01:46.831 03:59:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:01:46.831 03:59:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:01:46.831 03:59:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:01:46.831 03:59:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:46.831 03:59:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:46.831 03:59:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:01:46.831 03:59:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:01:46.831 03:59:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:01:46.831 03:59:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:46.831 03:59:01 -- spdk/autotest.sh@32 -- # uname -s 00:01:46.831 03:59:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:46.831 03:59:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:46.831 03:59:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:01:46.831 03:59:01 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:46.831 03:59:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:01:46.831 03:59:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:46.831 03:59:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:46.831 03:59:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:46.831 03:59:01 -- spdk/autotest.sh@48 -- # udevadm_pid=3732748 00:01:46.831 03:59:01 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:01:46.831 03:59:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:46.831 03:59:01 -- spdk/autotest.sh@54 -- # echo 3732750 00:01:46.831 03:59:01 -- spdk/autotest.sh@56 -- # echo 3732751 00:01:46.831 03:59:01 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:01:46.831 03:59:01 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:01:46.831 03:59:01 -- spdk/autotest.sh@60 -- # echo 3732752 00:01:46.831 03:59:01 -- spdk/autotest.sh@62 -- # echo 3732753 00:01:46.831 03:59:01 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:46.831 03:59:01 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:01:46.831 03:59:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:01:46.831 03:59:01 -- common/autotest_common.sh@10 -- # set +x 00:01:46.831 03:59:01 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:01:46.831 03:59:01 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l 00:01:46.831 03:59:01 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l 00:01:46.831 03:59:01 -- spdk/autotest.sh@70 -- # create_test_list 00:01:46.831 03:59:01 -- common/autotest_common.sh@736 -- # xtrace_disable 00:01:47.089 03:59:01 -- common/autotest_common.sh@10 -- # set +x 00:01:47.089 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:01:47.089 03:59:01 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/autotest.sh 00:01:47.089 03:59:01 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:47.089 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:01:47.089 03:59:01 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:47.089 03:59:01 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:01:47.089 03:59:01 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:47.089 03:59:01 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:01:47.089 03:59:01 -- common/autotest_common.sh@1440 -- # uname 00:01:47.089 03:59:01 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:01:47.089 03:59:01 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:01:47.089 03:59:01 -- common/autotest_common.sh@1460 -- # uname 00:01:47.089 03:59:01 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:01:47.089 03:59:01 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:01:47.089 03:59:01 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:01:47.089 03:59:01 -- spdk/autotest.sh@83 -- # hash lcov 00:01:47.089 03:59:01 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:47.089 03:59:01 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:01:47.089 --rc lcov_branch_coverage=1 00:01:47.089 --rc lcov_function_coverage=1 00:01:47.089 --rc genhtml_branch_coverage=1 00:01:47.089 --rc genhtml_function_coverage=1 00:01:47.089 --rc genhtml_legend=1 00:01:47.089 --rc geninfo_all_blocks=1 00:01:47.089 ' 00:01:47.089 03:59:01 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:01:47.089 --rc lcov_branch_coverage=1 00:01:47.089 --rc lcov_function_coverage=1 00:01:47.089 --rc genhtml_branch_coverage=1 00:01:47.089 --rc genhtml_function_coverage=1 00:01:47.089 --rc genhtml_legend=1 00:01:47.089 --rc 
geninfo_all_blocks=1 00:01:47.089 ' 00:01:47.089 03:59:01 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:01:47.089 --rc lcov_branch_coverage=1 00:01:47.089 --rc lcov_function_coverage=1 00:01:47.089 --rc genhtml_branch_coverage=1 00:01:47.089 --rc genhtml_function_coverage=1 00:01:47.089 --rc genhtml_legend=1 00:01:47.089 --rc geninfo_all_blocks=1 00:01:47.089 --no-external' 00:01:47.089 03:59:01 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:01:47.089 --rc lcov_branch_coverage=1 00:01:47.089 --rc lcov_function_coverage=1 00:01:47.089 --rc genhtml_branch_coverage=1 00:01:47.089 --rc genhtml_function_coverage=1 00:01:47.089 --rc genhtml_legend=1 00:01:47.089 --rc geninfo_all_blocks=1 00:01:47.089 --no-external' 00:01:47.089 03:59:01 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:47.089 lcov: LCOV version 1.14 00:01:47.089 03:59:01 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/dsa-phy-autotest/spdk -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info 00:01:53.648 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:53.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:01:53.648 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:53.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:01:53.648 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:53.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno 
00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:01.781 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:01.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:01.781 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:01.782 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:01.782 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:01.782 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:01.782 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:01.782 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:02.042 03:59:16 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:02.042 03:59:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:02.042 03:59:16 -- common/autotest_common.sh@10 -- # set +x 00:02:02.042 03:59:16 -- spdk/autotest.sh@102 -- # rm -f 00:02:02.042 03:59:16 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:04.598 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:02:04.598 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:02:04.598 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:02:04.598 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:02:04.598 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:02:04.598 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:02:04.598 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:02:04.598 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:02:04.858 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:02:04.858 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:02:04.858 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:02:04.858 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:02:04.858 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:02:04.858 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:02:04.858 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:02:04.858 0000:e7:01.0 (8086 
0b25): Already using the idxd driver 00:02:04.858 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:02:04.858 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:02:05.119 03:59:19 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:05.119 03:59:19 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:05.119 03:59:19 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:05.119 03:59:19 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:05.119 03:59:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:05.119 03:59:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:05.119 03:59:19 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:05.119 03:59:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:05.119 03:59:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:05.119 03:59:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:05.119 03:59:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:02:05.119 03:59:19 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:02:05.119 03:59:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:05.119 03:59:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:05.119 03:59:19 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:05.119 03:59:19 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 00:02:05.119 03:59:19 -- spdk/autotest.sh@121 -- # grep -v p 00:02:05.119 03:59:19 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:05.119 03:59:19 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:05.119 03:59:19 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:05.119 03:59:19 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:05.119 03:59:19 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:05.119 No valid GPT data, bailing 00:02:05.119 03:59:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:05.119 03:59:19 -- scripts/common.sh@393 -- # pt= 00:02:05.119 03:59:19 -- scripts/common.sh@394 -- # return 1 00:02:05.119 03:59:19 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:05.119 1+0 records in 00:02:05.119 1+0 records out 00:02:05.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00338618 s, 310 MB/s 00:02:05.119 03:59:19 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:05.119 03:59:19 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:05.119 03:59:19 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:02:05.119 03:59:19 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:02:05.119 03:59:19 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:05.119 No valid GPT data, bailing 00:02:05.119 03:59:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:05.119 03:59:19 -- scripts/common.sh@393 -- # pt= 00:02:05.119 03:59:19 -- scripts/common.sh@394 -- # return 1 00:02:05.119 03:59:19 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:05.119 1+0 records in 00:02:05.119 1+0 records out 00:02:05.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00209347 s, 501 MB/s 00:02:05.119 03:59:19 -- spdk/autotest.sh@129 -- # sync 00:02:05.119 03:59:19 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd 
reap_spdk_processes
00:02:05.119 03:59:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:05.119 03:59:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:10.447 03:59:24 -- spdk/autotest.sh@135 -- # uname -s
00:02:10.448 03:59:24 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']'
00:02:10.448 03:59:24 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh
00:02:10.448 03:59:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:10.448 03:59:24 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:10.448 03:59:24 -- common/autotest_common.sh@10 -- # set +x
00:02:10.448 ************************************
00:02:10.448 START TEST setup.sh
00:02:10.448 ************************************
00:02:10.448 03:59:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh
00:02:10.448 * Looking for test storage...
00:02:10.448 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup
00:02:10.448 03:59:24 -- setup/test-setup.sh@10 -- # uname -s
00:02:10.448 03:59:24 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:10.448 03:59:24 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh
00:02:10.448 03:59:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:10.448 03:59:24 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:10.448 03:59:24 -- common/autotest_common.sh@10 -- # set +x
00:02:10.448 ************************************
00:02:10.448 START TEST acl
00:02:10.448 ************************************
00:02:10.448 03:59:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh
00:02:10.448 * Looking for test storage...
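The acl test that starts here decides, for every PCI function on the node, which kernel driver it is currently bound to, keeps the NVMe controllers, and skips the idxd (DSA/IAA) engines; the readlink-on-/sys/bus/pci checks later in this trace are exactly that. Below is a minimal stand-alone sketch of the same sysfs walk, assuming nothing beyond standard sysfs paths; the variable names are illustrative, not the ones acl.sh uses.

#!/usr/bin/env bash
# Sketch only: list PCI functions and the driver each one is bound to,
# then keep the ones bound to the kernel nvme driver.
for dev in /sys/bus/pci/devices/*; do
    bdf=${dev##*/}                      # e.g. 0000:c9:00.0
    driver=
    if [[ -e $dev/driver ]]; then
        driver=$(basename "$(readlink -f "$dev/driver")")
    fi
    case $driver in
        nvme) echo "NVMe controller: $bdf" ;;   # collected by the test
        idxd) ;;                                # DSA/IAA engine: skipped
        *)    ;;                                # everything else ignored
    esac
done

Blocking or allowing a specific controller is then just a matter of exporting PCI_BLOCKED or PCI_ALLOWED with its BDF before scripts/setup.sh runs, which is what the denied and allowed sub-tests below exercise.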
00:02:10.448 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:10.448 03:59:24 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:10.448 03:59:24 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:10.448 03:59:24 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:10.448 03:59:24 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:10.448 03:59:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:10.448 03:59:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:10.448 03:59:24 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:10.448 03:59:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:10.448 03:59:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:10.448 03:59:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:10.448 03:59:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:02:10.448 03:59:24 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:02:10.448 03:59:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:10.448 03:59:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:10.448 03:59:24 -- setup/acl.sh@12 -- # devs=() 00:02:10.448 03:59:24 -- setup/acl.sh@12 -- # declare -a devs 00:02:10.448 03:59:24 -- setup/acl.sh@13 -- # drivers=() 00:02:10.448 03:59:24 -- setup/acl.sh@13 -- # declare -A drivers 00:02:10.448 03:59:24 -- setup/acl.sh@51 -- # setup reset 00:02:10.448 03:59:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:10.448 03:59:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:12.993 03:59:27 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:12.993 03:59:27 -- setup/acl.sh@16 -- # local dev driver 00:02:12.993 03:59:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:12.993 03:59:27 -- setup/acl.sh@15 -- # setup output status 00:02:12.993 03:59:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:12.993 03:59:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:02:15.536 Hugepages 00:02:15.536 node hugesize free / total 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 00:02:15.536 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:6a:01.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:6a:02.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd 
== nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:6f:01.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:6f:02.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:74:01.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:74:02.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:79:01.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:79:02.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:c9:00.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:15.536 03:59:29 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:ca:00.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\a\:\0\0\.\0* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:15.536 03:59:29 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:e7:01.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:e7:02.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:ec:01.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:ec:02.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 
-- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:f1:01.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:f1:02.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:f6:01.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@19 -- # [[ 0000:f6:02.0 == *:*:*.* ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:15.536 03:59:29 -- setup/acl.sh@20 -- # continue 00:02:15.536 03:59:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:15.536 03:59:29 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:02:15.536 03:59:29 -- setup/acl.sh@54 -- # run_test denied denied 00:02:15.536 03:59:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:15.536 03:59:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:15.536 03:59:29 -- common/autotest_common.sh@10 -- # set +x 00:02:15.536 ************************************ 00:02:15.536 START TEST denied 00:02:15.536 ************************************ 00:02:15.536 03:59:29 -- common/autotest_common.sh@1104 -- # denied 00:02:15.536 03:59:29 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:c9:00.0' 00:02:15.536 03:59:29 -- setup/acl.sh@38 -- # setup output config 00:02:15.536 03:59:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:15.537 03:59:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:15.537 03:59:29 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:c9:00.0' 00:02:20.825 0000:c9:00.0 (8086 0a54): Skipping denied controller at 0000:c9:00.0 00:02:20.825 03:59:35 -- setup/acl.sh@40 -- # verify 0000:c9:00.0 00:02:20.825 03:59:35 -- setup/acl.sh@28 -- # local dev driver 00:02:20.825 03:59:35 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:20.825 03:59:35 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:c9:00.0 ]] 00:02:20.825 03:59:35 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:c9:00.0/driver 00:02:20.825 03:59:35 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:20.825 03:59:35 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:20.825 03:59:35 -- setup/acl.sh@41 -- # setup reset 00:02:20.825 03:59:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:20.825 03:59:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:25.027 00:02:25.027 real 0m9.186s 00:02:25.027 user 0m1.984s 00:02:25.027 sys 0m3.873s 00:02:25.027 03:59:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:25.027 03:59:39 -- common/autotest_common.sh@10 -- # set +x 00:02:25.027 ************************************ 00:02:25.027 END TEST denied 00:02:25.027 ************************************ 00:02:25.027 03:59:39 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:25.027 03:59:39 -- common/autotest_common.sh@1077 
-- # '[' 2 -le 1 ']'
00:02:25.027 03:59:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:25.027 03:59:39 -- common/autotest_common.sh@10 -- # set +x
00:02:25.027 ************************************
00:02:25.027 START TEST allowed
00:02:25.027 ************************************
00:02:25.027 03:59:39 -- common/autotest_common.sh@1104 -- # allowed
00:02:25.027 03:59:39 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:c9:00.0
00:02:25.027 03:59:39 -- setup/acl.sh@45 -- # setup output config
00:02:25.027 03:59:39 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:25.027 03:59:39 -- setup/acl.sh@46 -- # grep -E '0000:c9:00.0 .*: nvme -> .*'
00:02:25.027 03:59:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config
00:02:30.316 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci
00:02:30.316 03:59:43 -- setup/acl.sh@47 -- # verify 0000:ca:00.0
00:02:30.316 03:59:43 -- setup/acl.sh@28 -- # local dev driver
00:02:30.316 03:59:43 -- setup/acl.sh@30 -- # for dev in "$@"
00:02:30.316 03:59:43 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:ca:00.0 ]]
00:02:30.316 03:59:43 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:ca:00.0/driver
00:02:30.316 03:59:43 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:30.316 03:59:43 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:02:30.316 03:59:43 -- setup/acl.sh@48 -- # setup reset
00:02:30.316 03:59:43 -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:30.316 03:59:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:02:32.860
00:02:32.860 real 0m7.751s
00:02:32.860 user 0m1.951s
00:02:32.860 sys 0m3.511s
00:02:32.860 03:59:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:32.860 03:59:46 -- common/autotest_common.sh@10 -- # set +x
00:02:32.860 ************************************
00:02:32.860 END TEST allowed
00:02:32.860 ************************************
00:02:32.860
00:02:32.860 real 0m22.898s
00:02:32.860 user 0m5.904s
00:02:32.860 sys 0m11.226s
00:02:32.860 03:59:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:32.860 03:59:47 -- common/autotest_common.sh@10 -- # set +x
00:02:32.860 ************************************
00:02:32.860 END TEST acl
00:02:32.860 ************************************
00:02:32.860 03:59:47 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh
00:02:32.860 03:59:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:32.860 03:59:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:32.860 03:59:47 -- common/autotest_common.sh@10 -- # set +x
00:02:32.860 ************************************
00:02:32.860 START TEST hugepages
00:02:32.860 ************************************
00:02:32.860 03:59:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh
00:02:32.860 * Looking for test storage...
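The hugepages test that starts here leans on two kernel interfaces: /proc/meminfo for the default hugepage size and pool totals, and the per-NUMA-node hugepage counters under /sys/devices/system/node. The get_meminfo trace that follows is a field-by-field scan of /proc/meminfo; the snippet below is only a condensed illustration of the same lookups, not the hugepages.sh code, and the variable names are made up.

#!/usr/bin/env bash
# Default hugepage size, the value the trace below resolves to 2048 (kB).
hugepagesize_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)
echo "Hugepagesize: ${hugepagesize_kb} kB"

# Pool-wide counters from the same file.
grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo

# Per-NUMA-node counters; clear_hp in this test writes 0 into each of these.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        printf '%s %s: %s pages\n' "${node##*/}" "${hp##*/}" "$(cat "$hp/nr_hugepages")"
    done
done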
00:02:32.860 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:32.860 03:59:47 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:32.860 03:59:47 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:32.860 03:59:47 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:32.860 03:59:47 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:32.860 03:59:47 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:32.860 03:59:47 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:32.860 03:59:47 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:32.860 03:59:47 -- setup/common.sh@18 -- # local node= 00:02:32.860 03:59:47 -- setup/common.sh@19 -- # local var val 00:02:32.860 03:59:47 -- setup/common.sh@20 -- # local mem_f mem 00:02:32.860 03:59:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.860 03:59:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.860 03:59:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.860 03:59:47 -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.860 03:59:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.860 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.860 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.860 03:59:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 239388788 kB' 'MemAvailable: 243025936 kB' 'Buffers: 2696 kB' 'Cached: 10892636 kB' 'SwapCached: 0 kB' 'Active: 7029172 kB' 'Inactive: 4390304 kB' 'Active(anon): 6458356 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533400 kB' 'Mapped: 211652 kB' 'Shmem: 5934212 kB' 'KReclaimable: 310160 kB' 'Slab: 964708 kB' 'SReclaimable: 310160 kB' 'SUnreclaim: 654548 kB' 'KernelStack: 25552 kB' 'PageTables: 11016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 135570688 kB' 'Committed_AS: 8115032 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330356 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:32.860 03:59:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.860 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.860 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.860 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.860 03:59:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.860 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.860 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.860 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.860 03:59:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.860 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.860 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.860 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 
00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 
00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.861 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.861 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # continue 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:02:32.862 03:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:02:32.862 03:59:47 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:32.862 03:59:47 -- setup/common.sh@33 -- # echo 2048 00:02:32.862 03:59:47 -- setup/common.sh@33 -- # return 0 00:02:32.862 03:59:47 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:32.862 03:59:47 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:32.862 03:59:47 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:32.862 03:59:47 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:32.862 03:59:47 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:32.862 03:59:47 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:32.862 03:59:47 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:32.862 03:59:47 -- setup/hugepages.sh@207 -- # get_nodes 00:02:32.862 03:59:47 -- setup/hugepages.sh@27 -- # local node 00:02:32.862 03:59:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.862 03:59:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:32.862 03:59:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.862 03:59:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:32.862 03:59:47 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:32.862 03:59:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:32.862 03:59:47 -- setup/hugepages.sh@208 -- # clear_hp 00:02:32.862 03:59:47 -- setup/hugepages.sh@37 -- # local node hp 00:02:32.862 03:59:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:32.862 03:59:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.862 03:59:47 -- setup/hugepages.sh@41 -- # echo 0 00:02:32.862 03:59:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.862 03:59:47 -- setup/hugepages.sh@41 -- # echo 0 00:02:32.862 03:59:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:32.862 03:59:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.862 03:59:47 -- setup/hugepages.sh@41 -- # echo 0 00:02:32.862 03:59:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:32.862 03:59:47 -- setup/hugepages.sh@41 -- # echo 0 00:02:32.862 03:59:47 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:32.862 03:59:47 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:32.862 03:59:47 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:32.862 03:59:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:32.862 03:59:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:32.862 03:59:47 -- common/autotest_common.sh@10 -- # set +x 00:02:32.862 ************************************ 00:02:32.862 START TEST default_setup 00:02:32.862 ************************************ 00:02:32.862 03:59:47 -- common/autotest_common.sh@1104 -- # default_setup 00:02:32.862 03:59:47 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:32.862 03:59:47 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:32.862 03:59:47 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:32.862 03:59:47 -- setup/hugepages.sh@51 -- # shift 00:02:32.862 03:59:47 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:32.862 03:59:47 -- setup/hugepages.sh@52 -- # local node_ids 00:02:32.862 03:59:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:32.862 03:59:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:32.862 03:59:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:32.862 03:59:47 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:32.862 03:59:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:32.862 03:59:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:32.862 03:59:47 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:32.862 03:59:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:32.862 03:59:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:32.862 03:59:47 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
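At this point the trace has computed the request for default_setup: 1024 hugepages of 2 MB each (2 GB total) pinned to NUMA node 0. The test itself funnels that request through scripts/setup.sh, but the kernel knob it ultimately comes down to is the per-node nr_hugepages file. The sketch below shows that end state under the assumption that node 0 and the 2048 kB pool are the targets, as they are in this run; it is an illustration of the interface, not the test's own code.

#!/usr/bin/env bash
# Sketch only: request 1024 x 2 MB hugepages on NUMA node 0 and read back
# what the kernel actually reserved. The autotest drives this via setup.sh;
# writing the sysfs file directly here is just for illustration.
node=0
want=1024
hp=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages

echo "$want" | sudo tee "$hp" > /dev/null
got=$(cat "$hp")
echo "node${node}: requested ${want}, reserved ${got} hugepages"

# The verify step afterwards cross-checks the same totals via /proc/meminfo.
grep -E '^HugePages_(Total|Free):' /proc/meminfo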
00:02:32.862 03:59:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:32.862 03:59:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:32.862 03:59:47 -- setup/hugepages.sh@73 -- # return 0 00:02:32.862 03:59:47 -- setup/hugepages.sh@137 -- # setup output 00:02:32.862 03:59:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:32.862 03:59:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:36.223 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.223 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.223 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.223 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.223 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.223 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.223 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.223 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.223 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.223 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.223 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.223 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.223 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.223 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:02:36.223 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:36.223 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:02:37.609 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:02:38.185 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:02:38.185 03:59:52 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:38.185 03:59:52 -- setup/hugepages.sh@89 -- # local node 00:02:38.185 03:59:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:38.185 03:59:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:38.185 03:59:52 -- setup/hugepages.sh@92 -- # local surp 00:02:38.185 03:59:52 -- setup/hugepages.sh@93 -- # local resv 00:02:38.185 03:59:52 -- setup/hugepages.sh@94 -- # local anon 00:02:38.185 03:59:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:38.185 03:59:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:38.185 03:59:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:38.185 03:59:52 -- setup/common.sh@18 -- # local node= 00:02:38.185 03:59:52 -- setup/common.sh@19 -- # local var val 00:02:38.185 03:59:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.185 03:59:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.185 03:59:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.185 03:59:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.185 03:59:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.185 03:59:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241731908 kB' 'MemAvailable: 245368752 kB' 'Buffers: 2696 kB' 'Cached: 10892912 kB' 'SwapCached: 0 kB' 'Active: 7039628 kB' 'Inactive: 4390304 kB' 'Active(anon): 6468812 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543852 kB' 'Mapped: 211632 kB' 'Shmem: 5934488 kB' 'KReclaimable: 309552 kB' 'Slab: 955936 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 646384 kB' 'KernelStack: 
24928 kB' 'PageTables: 10164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8112112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329604 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.185 03:59:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.185 03:59:52 -- setup/common.sh@33 -- # echo 0 00:02:38.185 03:59:52 -- setup/common.sh@33 -- # return 0 00:02:38.185 03:59:52 -- setup/hugepages.sh@97 -- # anon=0 00:02:38.185 03:59:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:38.185 03:59:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.185 03:59:52 -- setup/common.sh@18 -- # local node= 00:02:38.185 03:59:52 -- setup/common.sh@19 -- # local var val 00:02:38.185 03:59:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.185 03:59:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.185 03:59:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.185 03:59:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.185 03:59:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.185 03:59:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.185 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241730912 kB' 'MemAvailable: 245367756 kB' 'Buffers: 2696 kB' 'Cached: 10892912 kB' 'SwapCached: 0 kB' 'Active: 7039788 kB' 'Inactive: 4390304 kB' 'Active(anon): 6468972 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543980 kB' 'Mapped: 211632 kB' 'Shmem: 5934488 kB' 'KReclaimable: 309552 kB' 'Slab: 955916 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 646364 kB' 'KernelStack: 24816 kB' 'PageTables: 9548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8112124 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329604 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 
03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': 
' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.186 03:59:52 -- setup/common.sh@33 -- # echo 0 00:02:38.186 03:59:52 -- setup/common.sh@33 -- # return 0 00:02:38.186 03:59:52 -- setup/hugepages.sh@99 -- # surp=0 00:02:38.186 03:59:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:38.186 03:59:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:38.186 03:59:52 -- setup/common.sh@18 -- # local node= 00:02:38.186 03:59:52 -- setup/common.sh@19 -- # local var val 00:02:38.186 03:59:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.186 03:59:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.186 03:59:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.186 03:59:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.186 03:59:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.186 03:59:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241730080 kB' 'MemAvailable: 245366924 kB' 'Buffers: 2696 kB' 'Cached: 10892912 kB' 'SwapCached: 0 kB' 'Active: 7039396 kB' 'Inactive: 4390304 kB' 'Active(anon): 6468580 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543532 kB' 'Mapped: 211672 kB' 'Shmem: 5934488 kB' 'KReclaimable: 309552 kB' 'Slab: 955988 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 646436 kB' 'KernelStack: 24880 kB' 'PageTables: 9904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8112136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329588 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.186 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.186 03:59:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 
03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # 
continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.187 03:59:52 -- setup/common.sh@33 -- # echo 0 00:02:38.187 03:59:52 -- setup/common.sh@33 -- # return 0 00:02:38.187 03:59:52 -- setup/hugepages.sh@100 -- # resv=0 00:02:38.187 03:59:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:38.187 nr_hugepages=1024 00:02:38.187 03:59:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:38.187 resv_hugepages=0 00:02:38.187 03:59:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:38.187 surplus_hugepages=0 00:02:38.187 03:59:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:38.187 anon_hugepages=0 00:02:38.187 03:59:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.187 03:59:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:38.187 03:59:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:38.187 03:59:52 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:02:38.187 03:59:52 -- setup/common.sh@18 -- # local node= 00:02:38.187 03:59:52 -- setup/common.sh@19 -- # local var val 00:02:38.187 03:59:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.187 03:59:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.187 03:59:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.187 03:59:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.187 03:59:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.187 03:59:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241728960 kB' 'MemAvailable: 245365804 kB' 'Buffers: 2696 kB' 'Cached: 10892936 kB' 'SwapCached: 0 kB' 'Active: 7040348 kB' 'Inactive: 4390304 kB' 'Active(anon): 6469532 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544456 kB' 'Mapped: 211672 kB' 'Shmem: 5934512 kB' 'KReclaimable: 309552 kB' 'Slab: 955988 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 646436 kB' 'KernelStack: 24912 kB' 'PageTables: 10168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8110644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329604 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.187 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.187 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # 
continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 
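Note on the get_meminfo calls traced in this run: each one slurps /proc/meminfo (or the per-node meminfo file when a node id is given), strips the leading "Node N " prefix, then scans line by line until the requested key matches and echoes its value. A condensed sketch of that idiom under the standard meminfo format; this is a simplified illustration, not the setup/common.sh source:

    #!/usr/bin/env bash
    shopt -s extglob
    # get_meminfo KEY [NODE] -> print the numeric value for KEY, or 0 if absent.
    get_meminfo() {
        local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
        local -a mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0
    }
    get_meminfo HugePages_Total      # prints 1024 on a run like the one above
    get_meminfo HugePages_Free 0     # per-node query (hypothetical usage)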
00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.188 03:59:52 -- setup/common.sh@33 -- # echo 1024 00:02:38.188 03:59:52 -- setup/common.sh@33 -- # return 0 00:02:38.188 03:59:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.188 03:59:52 -- setup/hugepages.sh@112 -- # get_nodes 00:02:38.188 03:59:52 -- setup/hugepages.sh@27 -- # local node 00:02:38.188 03:59:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.188 03:59:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:38.188 03:59:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.188 03:59:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:38.188 03:59:52 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:38.188 03:59:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:38.188 03:59:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.188 03:59:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.188 03:59:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:38.188 03:59:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.188 03:59:52 -- setup/common.sh@18 -- # local node=0 00:02:38.188 03:59:52 -- setup/common.sh@19 -- # local var val 00:02:38.188 03:59:52 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.188 03:59:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.188 03:59:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.188 03:59:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.188 03:59:52 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.188 03:59:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 118527316 kB' 'MemUsed: 13288912 kB' 'SwapCached: 0 
kB' 'Active: 5633464 kB' 'Inactive: 3992136 kB' 'Active(anon): 5220588 kB' 'Inactive(anon): 0 kB' 'Active(file): 412876 kB' 'Inactive(file): 3992136 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9295664 kB' 'Mapped: 163488 kB' 'AnonPages: 339072 kB' 'Shmem: 4890652 kB' 'KernelStack: 14040 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 211376 kB' 'Slab: 549096 kB' 'SReclaimable: 211376 kB' 'SUnreclaim: 337720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 
03:59:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.188 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.188 03:59:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': 
' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # continue 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.450 03:59:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.450 03:59:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.450 03:59:52 -- setup/common.sh@33 -- # echo 0 00:02:38.450 03:59:52 -- setup/common.sh@33 -- # return 0 00:02:38.450 03:59:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.450 03:59:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.451 03:59:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.451 03:59:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.451 03:59:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:38.451 node0=1024 expecting 1024 00:02:38.451 03:59:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:38.451 00:02:38.451 real 0m5.604s 00:02:38.451 user 0m1.131s 00:02:38.451 sys 0m2.096s 00:02:38.451 03:59:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.451 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:02:38.451 ************************************ 00:02:38.451 END TEST default_setup 00:02:38.451 ************************************ 00:02:38.451 03:59:52 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:38.451 03:59:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:38.451 03:59:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:38.451 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:02:38.451 ************************************ 00:02:38.451 START TEST per_node_1G_alloc 00:02:38.451 ************************************ 00:02:38.451 03:59:52 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:02:38.451 03:59:52 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:38.451 03:59:52 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:38.451 03:59:52 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:38.451 03:59:52 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:38.451 03:59:52 -- setup/hugepages.sh@51 -- # shift 00:02:38.451 03:59:52 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:38.451 03:59:52 -- setup/hugepages.sh@52 -- # local node_ids 00:02:38.451 03:59:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:38.451 03:59:52 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:38.451 03:59:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:38.451 03:59:52 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:38.451 03:59:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:38.451 03:59:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:38.451 03:59:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:38.451 03:59:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:38.451 03:59:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:38.451 03:59:52 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:38.451 03:59:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:38.451 03:59:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:38.451 03:59:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:38.451 03:59:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:38.451 03:59:52 -- setup/hugepages.sh@73 -- # return 0 00:02:38.451 03:59:52 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:38.451 
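Every get_meminfo call traced above follows the same pattern: pick /proc/meminfo or a node-local meminfo file, strip the "Node <id>" prefix, then scan field by field until the requested key matches and echo its value. A minimal stand-alone sketch of that lookup, assuming only standard bash and the kernel's meminfo files (the function name and argument handling are illustrative, not the SPDK helper itself):

```bash
#!/usr/bin/env bash
# Hypothetical, simplified re-implementation of the meminfo lookup pattern traced above.
# The real helper lives in the test scripts' setup/common.sh; names here are illustrative.
shopt -s extglob

get_meminfo_field() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    # A per-node lookup (e.g. HugePages_Surp on node 0) reads the node-local meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node-local meminfo prefixes every line with "Node <id> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example calls matching the values in the trace:
#   get_meminfo_field HugePages_Total      # -> 1024 (system-wide)
#   get_meminfo_field HugePages_Surp 0     # -> 0 (surplus pages on node 0)
```

The escaped patterns in the trace (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) are just bash quoting each character so the comparison is literal; quoting "$get" in the sketch has the same effect.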
00:02:38.451 03:59:52 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:38.451 03:59:52 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:38.451 03:59:52 -- setup/hugepages.sh@146 -- # setup output
00:02:38.451 03:59:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:38.451 03:59:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:02:40.994 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:40.994 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:40.994 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:40.994 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:40.994 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:40.994 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:40.994 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:40.994 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:40.994 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:40.994 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:40.994 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:40.994 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:40.994 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:40.994 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:40.994 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:40.994 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:40.994 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:40.994 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
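The NRHUGE=512 / HUGENODE=0,1 environment drives scripts/setup.sh, whose internals are not part of this excerpt. For orientation, the generic kernel interface for placing a fixed number of 2 MiB hugepages on specific NUMA nodes is the per-node sysfs knob; the sketch below uses only that standard path, and the variable handling is illustrative rather than a copy of setup.sh:

```bash
#!/usr/bin/env bash
# Illustrative only: reserve $NRHUGE 2048 kB hugepages on each node listed in $HUGENODE.
# Requires root. This mirrors what a hugepage setup step generally has to do; it is not
# spdk/scripts/setup.sh itself.
NRHUGE=${NRHUGE:-512}
HUGENODE=${HUGENODE:-0,1}

IFS=',' read -r -a nodes <<< "$HUGENODE"
for node in "${nodes[@]}"; do
    knob=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "$NRHUGE" > "$knob"
    # Read the knob back: the kernel may grant fewer pages than requested.
    printf 'node%s: requested %s, allocated %s hugepages\n' "$node" "$NRHUGE" "$(cat "$knob")"
done
```

With 512 pages on each of two nodes, the pool that the next step verifies is 1024 pages, which matches the nr_hugepages=1024 seen below.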
00:02:41.258 03:59:55 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:41.258 03:59:55 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:41.258 [setup/hugepages.sh@89-@94: verify_nr_hugepages declares its locals (node, sorted_t, sorted_s, surp, resv, anon)]
00:02:41.258 03:59:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:41.258 03:59:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:41.258 [setup/common.sh@17-@31: get=AnonHugePages with no node argument, so mem_f stays /proc/meminfo; mapfile reads it and the read/compare cycle starts]
00:02:41.258 03:59:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241692004 kB' 'MemAvailable: 245328848 kB' 'Buffers: 2696 kB' 'Cached: 10893008 kB' 'SwapCached: 0 kB' 'Active: 7041120 kB' 'Inactive: 4390304 kB' 'Active(anon): 6470304 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544676 kB' 'Mapped: 211708 kB' 'Shmem: 5934584 kB' 'KReclaimable: 309552 kB' 'Slab: 955204 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645652 kB' 'KernelStack: 24736 kB' 'PageTables: 9944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8112240 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329636 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB'
00:02:41.259 [setup/common.sh@31-@32: the field loop walks /proc/meminfo; nothing matches until AnonHugePages]
00:02:41.259 03:59:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:41.259 03:59:55 -- setup/common.sh@33 -- # echo 0
00:02:41.259 03:59:55 -- setup/common.sh@33 -- # return 0
00:02:41.259 03:59:55 -- setup/hugepages.sh@97 -- # anon=0
00:02:41.259 03:59:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:41.259 [setup/common.sh@17-@31: get=HugePages_Surp, again read from /proc/meminfo]
00:02:41.260 03:59:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241697136 kB' 'MemAvailable: 245333980 kB' 'Buffers: 2696 kB' 'Cached: 10893024 kB' 'SwapCached: 0 kB' 'Active: 7041424 kB' 'Inactive: 4390304 kB' 'Active(anon): 6470608 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545040 kB' 'Mapped: 211708 kB' 'Shmem: 5934600 kB' 'KReclaimable: 309552 kB' 'Slab: 955188 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645636 kB' 'KernelStack: 24624 kB' 'PageTables: 9640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8111480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329572 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB'
00:02:41.260 [setup/common.sh@31-@32: the field loop walks /proc/meminfo; nothing matches until HugePages_Surp]
00:02:41.261 03:59:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.261 03:59:55 -- setup/common.sh@33 -- # echo 0
00:02:41.261 03:59:55 -- setup/common.sh@33 -- # return 0
00:02:41.261 03:59:55 -- setup/hugepages.sh@99 -- # surp=0
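verify_nr_hugepages has now read AnonHugePages (0) and HugePages_Surp (0) and still needs HugePages_Rsvd before it can run the consistency check traced at hugepages.sh@107 below. A hedged sketch of that bookkeeping, with the counter names taken from /proc/meminfo and the expected value from this run; everything else is illustrative:

```bash
#!/usr/bin/env bash
# Illustrative consistency check: the configured pool plus surplus and reserved pages
# should account for the HugePages_Total the kernel reports (1024 == 1024 + 0 + 0 here).
expected=1024   # what this test asked for: 512 pages on each of two nodes

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

if (( total == expected + surp + resv )); then
    echo "hugepage pool consistent: total=$total surplus=$surp reserved=$resv"
else
    echo "hugepage pool mismatch: total=$total expected=$expected surplus=$surp reserved=$resv" >&2
    exit 1
fi
```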
00:02:41.261 03:59:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:41.261 [setup/common.sh@17-@31: get=HugePages_Rsvd, once more read from /proc/meminfo]
00:02:41.261 03:59:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241697344 kB' 'MemAvailable: 245334188 kB' 'Buffers: 2696 kB' 'Cached: 10893036 kB' 'SwapCached: 0 kB' 'Active: 7040004 kB' 'Inactive: 4390304 kB' 'Active(anon): 6469188 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543960 kB' 'Mapped: 211600 kB' 'Shmem: 5934612 kB' 'KReclaimable: 309552 kB' 'Slab: 955156 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645604 kB' 'KernelStack: 24608 kB' 'PageTables: 9576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8111128 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329540 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB'
00:02:41.261 [setup/common.sh@31-@32: the field loop walks /proc/meminfo; nothing matches until HugePages_Rsvd]
00:02:41.262 03:59:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:41.262 03:59:55 -- setup/common.sh@33 -- # echo 0
00:02:41.262 03:59:55 -- setup/common.sh@33 -- # return 0
00:02:41.262 03:59:55 -- setup/hugepages.sh@100 -- # resv=0
00:02:41.262 03:59:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:41.262 nr_hugepages=1024
00:02:41.262 03:59:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:41.262 resv_hugepages=0
00:02:41.262 03:59:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:41.262 surplus_hugepages=0
00:02:41.262 03:59:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:41.262 anon_hugepages=0
00:02:41.262 03:59:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:41.262 03:59:55 -- setup/hugepages.sh@109 -- # (( 1024 ==
nr_hugepages )) 00:02:41.262 03:59:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:41.262 03:59:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:41.262 03:59:55 -- setup/common.sh@18 -- # local node= 00:02:41.262 03:59:55 -- setup/common.sh@19 -- # local var val 00:02:41.262 03:59:55 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.262 03:59:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.262 03:59:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.262 03:59:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.262 03:59:55 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.262 03:59:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.262 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 03:59:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241697680 kB' 'MemAvailable: 245334524 kB' 'Buffers: 2696 kB' 'Cached: 10893036 kB' 'SwapCached: 0 kB' 'Active: 7039540 kB' 'Inactive: 4390304 kB' 'Active(anon): 6468724 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543516 kB' 'Mapped: 211600 kB' 'Shmem: 5934612 kB' 'KReclaimable: 309552 kB' 'Slab: 955156 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645604 kB' 'KernelStack: 24608 kB' 'PageTables: 9580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8111144 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329540 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:41.262 03:59:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.262 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 03:59:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.262 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 03:59:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.262 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 03:59:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 
03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
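The run of IFS=': ' / read / continue entries around this point is setup/common.sh's get_meminfo helper scanning every field of /proc/meminfo until it reaches the key it was asked for (HugePages_Total in this call); each non-matching key produces one "continue" in the trace. A condensed bash sketch of that lookup pattern, reconstructed from this trace rather than copied from the SPDK source, is:

  # Sketch of the meminfo lookup being traced here (reconstructed, not verbatim SPDK code).
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # For per-node calls the node-specific meminfo is read instead and the
      # "Node N " prefix is stripped, as the node0/node1 lookups further down show.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          # Non-matching keys simply "continue" -- one IFS/read/continue
          # triple per meminfo field, which is what fills this part of the log.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

Used as, e.g., resv=$(get_meminfo HugePages_Rsvd), which is where the "echo 0 / return 0" and resv=0 entries earlier in this trace come from.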
00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.263 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- 
setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.264 03:59:55 -- setup/common.sh@33 -- # echo 1024 00:02:41.264 03:59:55 -- setup/common.sh@33 -- # return 0 00:02:41.264 03:59:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:41.264 03:59:55 -- setup/hugepages.sh@112 -- # get_nodes 00:02:41.264 03:59:55 -- setup/hugepages.sh@27 -- # local node 00:02:41.264 03:59:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.264 03:59:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:41.264 03:59:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.264 03:59:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:41.264 03:59:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:41.264 03:59:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:41.264 03:59:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.264 03:59:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.264 03:59:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:41.264 03:59:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.264 03:59:55 -- setup/common.sh@18 -- # local node=0 00:02:41.264 03:59:55 -- setup/common.sh@19 -- # local var val 00:02:41.264 03:59:55 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.264 03:59:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.264 03:59:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:41.264 03:59:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:41.264 03:59:55 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.264 03:59:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:41.264 03:59:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 119544112 kB' 'MemUsed: 12272116 kB' 'SwapCached: 0 kB' 'Active: 5632436 kB' 'Inactive: 3992136 kB' 'Active(anon): 5219560 kB' 'Inactive(anon): 0 kB' 'Active(file): 412876 kB' 'Inactive(file): 3992136 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9295720 kB' 'Mapped: 163488 kB' 'AnonPages: 337968 kB' 'Shmem: 4890708 kB' 'KernelStack: 13832 kB' 'PageTables: 7216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 211376 kB' 'Slab: 548664 kB' 'SReclaimable: 211376 kB' 'SUnreclaim: 337288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 
-- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 
00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@33 -- # echo 0 00:02:41.265 03:59:55 -- setup/common.sh@33 -- # return 0 00:02:41.265 03:59:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:41.265 03:59:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.265 03:59:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.265 03:59:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:41.265 03:59:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.265 03:59:55 -- setup/common.sh@18 -- # local node=1 00:02:41.265 03:59:55 -- setup/common.sh@19 -- # local var val 00:02:41.265 03:59:55 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.265 03:59:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.265 03:59:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:41.265 03:59:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:41.265 03:59:55 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.265 03:59:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742248 kB' 'MemFree: 122153064 kB' 'MemUsed: 4589184 kB' 'SwapCached: 0 kB' 'Active: 1407804 kB' 'Inactive: 398168 kB' 'Active(anon): 1249864 kB' 'Inactive(anon): 0 kB' 'Active(file): 157940 kB' 'Inactive(file): 398168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1600040 kB' 'Mapped: 48112 kB' 'AnonPages: 206200 kB' 'Shmem: 1043932 kB' 'KernelStack: 11016 kB' 'PageTables: 2644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98176 kB' 'Slab: 406492 kB' 'SReclaimable: 98176 kB' 'SUnreclaim: 308316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 
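The same lookup is then repeated once per NUMA node: HugePages_Surp is read from /sys/devices/system/node/node0/meminfo and node1/meminfo (both return 0 in this run), folded into the per-node expectation together with the reserved count, and finally reported as "nodeN=512 expecting 512" further down. A hedged sketch of that per-node accounting, with names taken from the trace (the real hugepages.sh loop may differ in detail):

  # Per-node accounting sketch matching the hugepages.sh@115-128 entries in this trace.
  nodes_test=(512 512)   # expected even split of the 1024 test pages
  nodes_sys=(512 512)    # per-node counts reported by sysfs (get_nodes above)
  resv=0                 # HugePages_Rsvd read earlier in this trace
  for node in "${!nodes_test[@]}"; do
      nodes_test[node]=$((nodes_test[node] + resv))
      surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes in this run
      nodes_test[node]=$((nodes_test[node] + surp))
      echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done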
00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.265 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.265 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # continue 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.266 03:59:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.266 03:59:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.266 03:59:55 -- setup/common.sh@33 -- # echo 0 00:02:41.266 03:59:55 -- setup/common.sh@33 -- # return 0 00:02:41.266 03:59:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:41.266 03:59:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:41.266 03:59:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:41.266 03:59:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:41.266 03:59:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:41.266 node0=512 expecting 512 00:02:41.266 03:59:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:41.266 03:59:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:41.266 03:59:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:41.266 03:59:55 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:41.266 node1=512 expecting 512 00:02:41.266 03:59:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:41.266 00:02:41.266 real 0m2.906s 00:02:41.266 user 0m0.996s 00:02:41.266 sys 0m1.762s 00:02:41.266 03:59:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.266 03:59:55 -- common/autotest_common.sh@10 -- # set +x 00:02:41.266 ************************************ 00:02:41.266 END TEST per_node_1G_alloc 00:02:41.266 ************************************ 00:02:41.266 03:59:55 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:41.266 03:59:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:41.266 03:59:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:41.266 03:59:55 -- common/autotest_common.sh@10 -- # set +x 00:02:41.266 ************************************ 00:02:41.266 START TEST even_2G_alloc 00:02:41.266 ************************************ 00:02:41.266 03:59:55 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:02:41.266 03:59:55 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:41.266 03:59:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:41.266 03:59:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:41.266 03:59:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:41.266 03:59:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:41.266 03:59:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:41.266 03:59:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:41.266 03:59:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.266 03:59:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:41.266 03:59:55 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.266 03:59:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.266 03:59:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.266 03:59:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:41.266 03:59:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:41.266 03:59:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:41.266 03:59:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:41.266 03:59:55 -- setup/hugepages.sh@83 -- # : 512 00:02:41.266 03:59:55 -- setup/hugepages.sh@84 -- # : 1 00:02:41.266 03:59:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:41.266 03:59:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:41.266 03:59:55 -- setup/hugepages.sh@83 -- # : 0 00:02:41.266 03:59:55 -- setup/hugepages.sh@84 -- # : 0 00:02:41.266 03:59:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:41.266 03:59:55 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:41.266 03:59:55 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:41.266 03:59:55 -- setup/hugepages.sh@153 -- # setup output 00:02:41.266 03:59:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.266 03:59:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:44.571 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:44.571 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:44.571 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:44.571 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:44.571 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:44.571 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:44.571 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:44.572 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:44.572 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:44.572 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:44.572 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:44.572 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:44.572 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:44.572 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:44.572 0000:ca:00.0 (8086 0a54): 
Already using the vfio-pci driver 00:02:44.572 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:44.572 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:44.572 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:44.572 03:59:58 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:44.572 03:59:58 -- setup/hugepages.sh@89 -- # local node 00:02:44.572 03:59:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:44.572 03:59:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:44.572 03:59:58 -- setup/hugepages.sh@92 -- # local surp 00:02:44.572 03:59:58 -- setup/hugepages.sh@93 -- # local resv 00:02:44.572 03:59:58 -- setup/hugepages.sh@94 -- # local anon 00:02:44.572 03:59:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:44.572 03:59:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:44.572 03:59:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:44.572 03:59:58 -- setup/common.sh@18 -- # local node= 00:02:44.572 03:59:58 -- setup/common.sh@19 -- # local var val 00:02:44.572 03:59:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.572 03:59:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.572 03:59:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.572 03:59:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.572 03:59:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.572 03:59:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241721824 kB' 'MemAvailable: 245358668 kB' 'Buffers: 2696 kB' 'Cached: 10893156 kB' 'SwapCached: 0 kB' 'Active: 7030832 kB' 'Inactive: 4390304 kB' 'Active(anon): 6460016 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534196 kB' 'Mapped: 210796 kB' 'Shmem: 5934732 kB' 'KReclaimable: 309552 kB' 'Slab: 954912 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645360 kB' 'KernelStack: 24400 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8056628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329316 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 
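For the even_2G_alloc case traced just above, NRHUGE=1024 and HUGE_EVEN_ALLOC=yes were set before re-running scripts/setup.sh, so the 1024 pages of 2048 kB are meant to land 512 per NUMA node. Whatever setup.sh does internally, an even allocation like this ultimately goes through the kernel's per-node sysfs knob; a minimal hedged sketch of requesting it directly (an illustration, not the setup.sh implementation) is:

  # Spread NRHUGE 2 MiB pages evenly over all online NUMA nodes via sysfs.
  NRHUGE=${NRHUGE:-1024}
  nodes=(/sys/devices/system/node/node[0-9]*)
  per_node=$((NRHUGE / ${#nodes[@]}))   # 1024 / 2 = 512 on this host
  for n in "${nodes[@]}"; do
      echo "$per_node" | sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
  done
  grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1024 / 1024 afterwards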
00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.572 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.572 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 
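The long runs of "[[ Key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries are simply xtrace (set -x) output of that key-by-key scan: bash prints the quoted right-hand side of the pattern match with every character backslash-escaped, so each /proc/meminfo key produces one comparison line until AnonHugePages is reached. For a single value, an equivalent one-shot lookup (shown only for reference, not what the test scripts use) would be:

    awk -v key=AnonHugePages '$1 == key ":" { print $2 }' /proc/meminfo   # prints 0 on this host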
00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ AnonHugePages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.573 03:59:58 -- setup/common.sh@33 -- # echo 0 00:02:44.573 03:59:58 -- setup/common.sh@33 -- # return 0 00:02:44.573 03:59:58 -- setup/hugepages.sh@97 -- # anon=0 00:02:44.573 03:59:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:44.573 03:59:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.573 03:59:58 -- setup/common.sh@18 -- # local node= 00:02:44.573 03:59:58 -- setup/common.sh@19 -- # local var val 00:02:44.573 03:59:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.573 03:59:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.573 03:59:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.573 03:59:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.573 03:59:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.573 03:59:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241721348 kB' 'MemAvailable: 245358192 kB' 'Buffers: 2696 kB' 'Cached: 10893160 kB' 'SwapCached: 0 kB' 'Active: 7031548 kB' 'Inactive: 4390304 kB' 'Active(anon): 6460732 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534936 kB' 'Mapped: 210796 kB' 'Shmem: 5934736 kB' 'KReclaimable: 309552 kB' 'Slab: 954912 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645360 kB' 'KernelStack: 24512 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8056636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329332 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ 
Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.573 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.573 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 
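This second pass repeats the same scan for HugePages_Surp, the count of huge pages the kernel has allocated beyond the configured pool; the script stores the result in surp, and 0 is the expected value here. All of the hugepage counters it cares about can also be read at a glance with a plain grep (for reference only):

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo

On this host that reports HugePages_Total and HugePages_Free of 1024, Rsvd and Surp of 0, a 2048 kB page size, and 2097152 kB of Hugetlb memory, matching the dumps in the trace.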
00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.574 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.574 03:59:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.575 03:59:58 -- setup/common.sh@33 -- # echo 0 00:02:44.575 03:59:58 -- setup/common.sh@33 -- # return 0 00:02:44.575 03:59:58 -- setup/hugepages.sh@99 -- # surp=0 00:02:44.575 03:59:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:44.575 03:59:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:44.575 03:59:58 -- setup/common.sh@18 -- # local node= 00:02:44.575 03:59:58 -- setup/common.sh@19 -- # local var val 00:02:44.575 03:59:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.575 03:59:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.575 03:59:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.575 03:59:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.575 03:59:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.575 03:59:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.575 03:59:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241722412 kB' 'MemAvailable: 245359256 kB' 'Buffers: 2696 kB' 'Cached: 10893172 kB' 'SwapCached: 0 kB' 'Active: 7030696 kB' 'Inactive: 4390304 kB' 'Active(anon): 6459880 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534496 kB' 'Mapped: 210720 kB' 'Shmem: 5934748 kB' 'KReclaimable: 309552 kB' 'Slab: 954904 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645352 kB' 'KernelStack: 24528 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8058160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329444 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 
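The third lookup, HugePages_Rsvd, counts huge pages that have been reserved by mappings but not yet faulted in. Once anon, surp and resv are known, the function checks them against the configured pool (1024 pages in this job, as echoed a little further down): the HugePages_Total reported by the kernel must equal the requested number of pages plus any surplus and reserved pages it found. A self-contained way to reproduce that arithmetic, assuming the same /proc/meminfo layout (illustration only):

    total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  { print $2 }' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  { print $2 }' /proc/meminfo)
    nr_hugepages=1024   # the pool size this run configured
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    else
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv"
    fi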
00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.575 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.575 03:59:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 
03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.576 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.576 03:59:58 -- setup/common.sh@33 -- # echo 0 00:02:44.576 03:59:58 -- setup/common.sh@33 -- # return 0 00:02:44.576 03:59:58 -- setup/hugepages.sh@100 -- # resv=0 00:02:44.576 03:59:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:44.576 nr_hugepages=1024 00:02:44.576 03:59:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:44.576 resv_hugepages=0 00:02:44.576 03:59:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:44.576 surplus_hugepages=0 00:02:44.576 03:59:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:44.576 anon_hugepages=0 00:02:44.576 03:59:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.576 03:59:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:44.576 03:59:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:44.576 03:59:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:44.576 03:59:58 -- setup/common.sh@18 -- # local node= 00:02:44.576 03:59:58 -- setup/common.sh@19 -- # local var val 00:02:44.576 03:59:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.576 03:59:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.576 03:59:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.576 03:59:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.576 03:59:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.576 03:59:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.576 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241722304 kB' 'MemAvailable: 245359148 kB' 'Buffers: 2696 kB' 'Cached: 10893188 kB' 'SwapCached: 0 kB' 'Active: 7030780 kB' 'Inactive: 4390304 kB' 'Active(anon): 6459964 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534564 kB' 'Mapped: 
210720 kB' 'Shmem: 5934764 kB' 'KReclaimable: 309552 kB' 'Slab: 954904 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645352 kB' 'KernelStack: 24560 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8058176 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329492 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.577 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.577 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.578 03:59:58 -- setup/common.sh@33 -- # echo 1024 00:02:44.578 03:59:58 -- setup/common.sh@33 -- # return 0 00:02:44.578 03:59:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.578 03:59:58 -- setup/hugepages.sh@112 -- # get_nodes 00:02:44.578 03:59:58 -- setup/hugepages.sh@27 -- # local node 00:02:44.578 03:59:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.578 03:59:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:44.578 03:59:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.578 03:59:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:44.578 03:59:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.578 03:59:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.578 03:59:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.578 03:59:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.578 03:59:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:44.578 03:59:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.578 03:59:58 -- setup/common.sh@18 -- # local node=0 00:02:44.578 03:59:58 -- setup/common.sh@19 -- # local var val 00:02:44.578 03:59:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.578 03:59:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.578 03:59:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:44.578 03:59:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:44.578 03:59:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.578 03:59:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 119578128 kB' 'MemUsed: 12238100 kB' 'SwapCached: 0 kB' 'Active: 5624392 kB' 'Inactive: 3992136 kB' 'Active(anon): 5211516 kB' 'Inactive(anon): 0 kB' 'Active(file): 412876 kB' 'Inactive(file): 3992136 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9295808 kB' 'Mapped: 162584 kB' 'AnonPages: 329892 kB' 'Shmem: 4890796 kB' 'KernelStack: 13560 kB' 'PageTables: 5972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 211376 kB' 'Slab: 548772 kB' 'SReclaimable: 211376 kB' 'SUnreclaim: 337396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.578 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.578 
03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.578 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- 
setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.579 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.579 03:59:58 -- setup/common.sh@33 -- # echo 0 00:02:44.579 03:59:58 -- setup/common.sh@33 -- # return 0 00:02:44.579 03:59:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.579 03:59:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.579 03:59:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.579 03:59:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:44.579 03:59:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.579 03:59:58 -- setup/common.sh@18 -- # local node=1 00:02:44.579 03:59:58 -- setup/common.sh@19 -- # local var val 00:02:44.579 03:59:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.579 03:59:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.579 03:59:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:44.579 03:59:58 -- setup/common.sh@24 -- 
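(Editor's note: the block above is get_meminfo reading HugePages_Surp for node 0. With a node argument it switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, strips the "Node 0" prefix from every row, and scans field names until the requested one is found, which is 0 here. The function below, get_node_meminfo, is a simplified illustration of that pattern under those assumptions, not the helper from setup/common.sh.)

    # Read one meminfo field, optionally from a specific NUMA node's view.
    # get_node_meminfo HugePages_Surp 0  ->  0 in the run above.
    get_node_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#"Node $node "}        # per-node files prefix each row with "Node N"
            var=${line%%:*}
            [[ $var == "$get" ]] || continue
            val=${line#*:}
            echo "${val//[![:digit:]]/}"      # keep the numeric value only
            return 0
        done < "$mem_f"
        return 1
    }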
# mem_f=/sys/devices/system/node/node1/meminfo 00:02:44.579 03:59:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.579 03:59:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.579 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742248 kB' 'MemFree: 122142348 kB' 'MemUsed: 4599900 kB' 'SwapCached: 0 kB' 'Active: 1406448 kB' 'Inactive: 398168 kB' 'Active(anon): 1248508 kB' 'Inactive(anon): 0 kB' 'Active(file): 157940 kB' 'Inactive(file): 398168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1600088 kB' 'Mapped: 48128 kB' 'AnonPages: 204644 kB' 'Shmem: 1043980 kB' 'KernelStack: 10968 kB' 'PageTables: 2716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98176 kB' 'Slab: 406132 kB' 'SReclaimable: 98176 kB' 'SUnreclaim: 307956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- 
setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.580 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.580 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.581 03:59:58 -- setup/common.sh@32 -- # continue 00:02:44.581 03:59:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.581 03:59:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.581 03:59:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.581 03:59:58 -- setup/common.sh@33 -- # echo 0 00:02:44.581 03:59:58 -- setup/common.sh@33 -- # return 0 00:02:44.581 03:59:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.581 03:59:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.581 03:59:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.581 03:59:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.581 03:59:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:44.581 node0=512 expecting 512 00:02:44.581 03:59:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.581 03:59:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.581 03:59:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.581 03:59:58 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:44.581 node1=512 expecting 512 00:02:44.581 03:59:58 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:44.581 00:02:44.581 real 0m3.143s 00:02:44.581 user 0m1.069s 00:02:44.581 sys 0m1.944s 00:02:44.581 03:59:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.581 03:59:58 -- common/autotest_common.sh@10 -- # set +x 00:02:44.581 ************************************ 00:02:44.581 END TEST even_2G_alloc 00:02:44.581 ************************************ 00:02:44.581 03:59:58 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:44.581 03:59:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:44.581 03:59:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:44.581 03:59:58 -- common/autotest_common.sh@10 -- # set +x 00:02:44.581 ************************************ 00:02:44.581 START TEST odd_alloc 00:02:44.581 ************************************ 00:02:44.581 03:59:58 -- common/autotest_common.sh@1104 -- # odd_alloc 00:02:44.581 03:59:58 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:44.581 03:59:58 -- setup/hugepages.sh@49 -- # local size=2098176 00:02:44.581 03:59:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:44.581 03:59:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:44.581 03:59:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:44.581 03:59:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:44.581 03:59:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:44.581 03:59:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.581 03:59:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:44.581 03:59:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.581 03:59:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.581 03:59:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.581 03:59:58 
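(Editor's note: even_2G_alloc passes with 512 pages on each node, 1024 in total, and odd_alloc then asks get_test_nr_hugepages for 2098176 kB. With the 2048 kB hugepage size reported later in the meminfo dumps, that request does not divide evenly, which is the point of the test. The arithmetic below reproduces the 1025 pages and the 512/513 per-node split assigned in the trace; the exact rounding rule inside hugepages.sh is not shown in this excerpt, so the ceiling division here is only one way to get the same numbers.)

    # 2098176 kB / 2048 kB-per-page = 1024.5 -> 1025 pages, split 512 + 513 over 2 nodes.
    size_kb=2098176
    page_kb=2048
    nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))     # 1025
    half=$(( nr_hugepages / 2 ))
    echo "nr_hugepages=$nr_hugepages per-node split: $half and $(( nr_hugepages - half ))"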
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:44.581 03:59:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:44.581 03:59:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.581 03:59:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:44.581 03:59:58 -- setup/hugepages.sh@83 -- # : 513 00:02:44.581 03:59:58 -- setup/hugepages.sh@84 -- # : 1 00:02:44.581 03:59:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.581 03:59:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:44.581 03:59:58 -- setup/hugepages.sh@83 -- # : 0 00:02:44.581 03:59:58 -- setup/hugepages.sh@84 -- # : 0 00:02:44.581 03:59:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.581 03:59:58 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:44.581 03:59:58 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:44.581 03:59:58 -- setup/hugepages.sh@160 -- # setup output 00:02:44.581 03:59:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.581 03:59:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:47.886 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.886 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:47.886 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.886 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.886 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.886 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.886 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.886 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.886 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.886 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.886 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.886 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.886 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.886 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.886 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:47.886 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.886 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:47.886 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:47.886 04:00:01 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:47.886 04:00:01 -- setup/hugepages.sh@89 -- # local node 00:02:47.886 04:00:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:47.886 04:00:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:47.886 04:00:01 -- setup/hugepages.sh@92 -- # local surp 00:02:47.886 04:00:01 -- setup/hugepages.sh@93 -- # local resv 00:02:47.886 04:00:01 -- setup/hugepages.sh@94 -- # local anon 00:02:47.886 04:00:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:47.886 04:00:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:47.886 04:00:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:47.886 04:00:01 -- setup/common.sh@18 -- # local node= 00:02:47.886 04:00:01 -- setup/common.sh@19 -- # local var val 00:02:47.886 04:00:01 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.886 04:00:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.886 04:00:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.886 04:00:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.886 
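(Editor's note: before the verify pass, the test exports HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes and re-runs scripts/setup.sh; the "Already using the vfio-pci driver" lines are setup.sh reporting that the listed devices did not need rebinding. Below is a hedged sketch of that sequence with the path and values taken from the log. setup.sh generally needs root, hence the sudo -E to keep the exported variables; whether the CI job invokes it exactly this way is not shown in this excerpt.)

    # Reserve an odd number of 2 MB hugepages, then recheck the pool size.
    export HUGEMEM=2049          # MB to reserve -> 1025 pages of 2048 kB
    export HUGE_EVEN_ALLOC=yes   # value seen in the trace above
    sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1025 / 1025 afterwards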
04:00:01 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.886 04:00:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241597780 kB' 'MemAvailable: 245234624 kB' 'Buffers: 2696 kB' 'Cached: 10893296 kB' 'SwapCached: 0 kB' 'Active: 7044648 kB' 'Inactive: 4390304 kB' 'Active(anon): 6473832 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548308 kB' 'Mapped: 211756 kB' 'Shmem: 5934872 kB' 'KReclaimable: 309552 kB' 'Slab: 954724 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645172 kB' 'KernelStack: 24752 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618240 kB' 'Committed_AS: 8075564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329644 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 
04:00:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.886 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.886 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 
04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 
00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.887 04:00:01 -- setup/common.sh@33 -- # echo 0 00:02:47.887 04:00:01 -- setup/common.sh@33 -- # return 0 00:02:47.887 04:00:01 -- setup/hugepages.sh@97 -- # anon=0 00:02:47.887 04:00:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:47.887 04:00:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.887 04:00:01 -- setup/common.sh@18 -- # local node= 00:02:47.887 04:00:01 -- setup/common.sh@19 -- # local var val 00:02:47.887 04:00:01 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.887 04:00:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.887 04:00:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.887 04:00:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.887 04:00:01 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.887 04:00:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241596976 kB' 'MemAvailable: 245233820 kB' 'Buffers: 2696 kB' 'Cached: 10893296 kB' 'SwapCached: 0 kB' 'Active: 7045564 kB' 'Inactive: 4390304 kB' 'Active(anon): 6474748 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 
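(Editor's note: verify_nr_hugepages first decides whether anonymous THP pages have to be accounted for. The "always [madvise] never != *[never]*" test earlier in the trace is matching the standard contents of the transparent_hugepage "enabled" file; since THP is not disabled, the helper goes on to read AnonHugePages, which is 0 kB on this host, so anon=0. The same two reads, done directly:)

    # THP mode and current anonymous hugepage usage, as consulted by the verify step.
    cat /sys/kernel/mm/transparent_hugepage/enabled   # "always [madvise] never" here
    grep AnonHugePages /proc/meminfo                  # AnonHugePages: 0 kB in this run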
'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549224 kB' 'Mapped: 211756 kB' 'Shmem: 5934872 kB' 'KReclaimable: 309552 kB' 'Slab: 954820 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645268 kB' 'KernelStack: 24896 kB' 'PageTables: 9644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618240 kB' 'Committed_AS: 8075576 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329724 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.887 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.887 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- 
setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.888 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.888 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 
00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.889 04:00:01 -- setup/common.sh@33 -- # echo 0 00:02:47.889 04:00:01 -- setup/common.sh@33 -- # return 0 00:02:47.889 04:00:01 -- setup/hugepages.sh@99 -- # surp=0 00:02:47.889 04:00:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:47.889 04:00:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:47.889 04:00:01 -- setup/common.sh@18 -- # local node= 00:02:47.889 04:00:01 -- setup/common.sh@19 -- # local var val 00:02:47.889 04:00:01 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.889 04:00:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.889 04:00:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.889 04:00:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.889 04:00:01 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.889 04:00:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241595576 kB' 'MemAvailable: 245232420 kB' 'Buffers: 2696 kB' 'Cached: 10893308 kB' 'SwapCached: 0 kB' 'Active: 7039204 kB' 'Inactive: 4390304 kB' 'Active(anon): 6468388 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542832 kB' 'Mapped: 211752 kB' 'Shmem: 5934884 kB' 'KReclaimable: 309552 kB' 'Slab: 954792 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645240 kB' 'KernelStack: 24896 kB' 'PageTables: 9480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618240 kB' 'Committed_AS: 8066480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329592 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 
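(Editor's note: with anon=0 and surp=0 established, the HugePages_Rsvd read in progress above supplies the last term of the same check that closed the even_2G_alloc case earlier, namely HugePages_Total == nr_hugepages + surplus + reserved. For the odd_alloc case that should reduce to 1025 == 1025 + 0 + 0. A direct way to reproduce that bookkeeping outside the test harness, with 1025 taken from the request in the log:)

    # Recompute the hugepage pool check from /proc/meminfo (values per the trace).
    total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp/  {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd/  {print $2}' /proc/meminfo)
    (( total == 1025 + surp + rsvd )) && echo "hugepage pool matches the odd_alloc request"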
16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- 
# continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.889 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.889 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 
-- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.890 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.890 04:00:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
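(Editor's note) The meminfo snapshot replayed a few entries above reports 'HugePages_Total: 1025', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2099200 kB'; those figures are internally consistent, as the quick check below shows.

# Consistency check on the snapshot above: Hugetlb should equal
# HugePages_Total * Hugepagesize, both as reported by /proc/meminfo.
echo $(( 1025 * 2048 ))   # 2099200 (kB), matching the 'Hugetlb: 2099200 kB' line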
00:02:47.890 04:00:02 -- setup/common.sh@33 -- # echo 0 00:02:47.890 04:00:02 -- setup/common.sh@33 -- # return 0 00:02:47.891 04:00:02 -- setup/hugepages.sh@100 -- # resv=0 00:02:47.891 04:00:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:47.891 nr_hugepages=1025 00:02:47.891 04:00:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:47.891 resv_hugepages=0 00:02:47.891 04:00:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:47.891 surplus_hugepages=0 00:02:47.891 04:00:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:47.891 anon_hugepages=0 00:02:47.891 04:00:02 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:47.891 04:00:02 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:47.891 04:00:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:47.891 04:00:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:47.891 04:00:02 -- setup/common.sh@18 -- # local node= 00:02:47.891 04:00:02 -- setup/common.sh@19 -- # local var val 00:02:47.891 04:00:02 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.891 04:00:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.891 04:00:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.891 04:00:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.891 04:00:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.891 04:00:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241596716 kB' 'MemAvailable: 245233560 kB' 'Buffers: 2696 kB' 'Cached: 10893324 kB' 'SwapCached: 0 kB' 'Active: 7038500 kB' 'Inactive: 4390304 kB' 'Active(anon): 6467684 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542112 kB' 'Mapped: 211600 kB' 'Shmem: 5934900 kB' 'KReclaimable: 309552 kB' 'Slab: 954928 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645376 kB' 'KernelStack: 24720 kB' 'PageTables: 9444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618240 kB' 'Committed_AS: 8068004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329576 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.891 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.891 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.892 04:00:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.892 04:00:02 -- setup/common.sh@33 -- # echo 1025 00:02:47.892 04:00:02 -- setup/common.sh@33 -- # return 0 00:02:47.892 04:00:02 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:47.892 04:00:02 -- setup/hugepages.sh@112 -- # get_nodes 00:02:47.892 04:00:02 -- setup/hugepages.sh@27 -- # local node 00:02:47.892 04:00:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.892 04:00:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:47.892 04:00:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.892 04:00:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:47.892 04:00:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:47.892 04:00:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:47.892 04:00:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:47.892 04:00:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:47.892 04:00:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 
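(Editor's note) The trace above gathers HugePages_Surp and HugePages_Rsvd, checks that nr_hugepages plus surplus plus reserved equals the kernel's HugePages_Total (1025), records the per-node expectation via get_nodes (512/513 across the two nodes), and then starts a per-node pass that queries HugePages_Surp for node 0 and node 1. A hedged sketch of that verification flow follows; it assumes the get_meminfo sketch given earlier and hard-codes this run's counts, and it is not the verbatim setup/hugepages.sh helper.

# Sketch of the odd_alloc verification traced above.
verify_odd_alloc() {
    local nr_hugepages=1025                 # total pages requested for this test
    local want=(513 512)                    # per-node request in this run (node0, node1)
    local got=() sorted_want=() sorted_got=()
    local surp resv node

    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    # Global accounting: kernel pool == requested pages + surplus + reserved (all 0 here).
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1

    # Per-node pass: record each node's pool and require that none of it is surplus.
    for node in "${!want[@]}"; do
        got[node]=$(get_meminfo HugePages_Total "$node")
        (( $(get_meminfo HugePages_Surp "$node") == 0 )) || return 1
        echo "node$node=${got[node]} expecting ${want[node]}"
    done

    # Order-insensitive comparison: the kernel may place 512/513 on either node,
    # so only the sorted sets of per-node counts have to match.
    for node in "${!want[@]}"; do
        sorted_got[got[node]]=1
        sorted_want[want[node]]=1
    done
    [[ ${!sorted_got[*]} == "${!sorted_want[*]}" ]]
}

With this run's numbers the per-node echoes come out as "node0=512 expecting 513" and "node1=513 expecting 512", and the final set comparison ("512 513" vs "512 513") still passes, which is exactly what the trace below reports.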
00:02:47.892 04:00:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.892 04:00:02 -- setup/common.sh@18 -- # local node=0 00:02:47.892 04:00:02 -- setup/common.sh@19 -- # local var val 00:02:47.892 04:00:02 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.892 04:00:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.892 04:00:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:47.892 04:00:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:47.892 04:00:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.892 04:00:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.892 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 119465728 kB' 'MemUsed: 12350500 kB' 'SwapCached: 0 kB' 'Active: 5630612 kB' 'Inactive: 3992136 kB' 'Active(anon): 5217736 kB' 'Inactive(anon): 0 kB' 'Active(file): 412876 kB' 'Inactive(file): 3992136 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9295900 kB' 'Mapped: 162744 kB' 'AnonPages: 335984 kB' 'Shmem: 4890888 kB' 'KernelStack: 13736 kB' 'PageTables: 6184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 211376 kB' 'Slab: 549076 kB' 'SReclaimable: 211376 kB' 'SUnreclaim: 337700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 
04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.893 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.893 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@33 -- # echo 0 00:02:47.894 04:00:02 -- setup/common.sh@33 -- # return 0 00:02:47.894 04:00:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.894 04:00:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:47.894 04:00:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:47.894 04:00:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:47.894 04:00:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.894 04:00:02 -- setup/common.sh@18 -- # local node=1 00:02:47.894 04:00:02 -- setup/common.sh@19 -- # local var val 00:02:47.894 04:00:02 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.894 04:00:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.894 04:00:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:47.894 04:00:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:47.894 04:00:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.894 04:00:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742248 kB' 'MemFree: 122132604 kB' 'MemUsed: 4609644 kB' 'SwapCached: 0 kB' 'Active: 1408012 kB' 'Inactive: 398168 kB' 'Active(anon): 1250072 kB' 'Inactive(anon): 0 kB' 'Active(file): 157940 kB' 'Inactive(file): 398168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1600136 kB' 'Mapped: 48856 kB' 'AnonPages: 206180 kB' 'Shmem: 1044028 kB' 'KernelStack: 11048 kB' 'PageTables: 2952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98176 kB' 'Slab: 405820 kB' 'SReclaimable: 98176 kB' 'SUnreclaim: 307644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 
04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 
-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.894 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.894 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # continue 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.895 04:00:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.895 04:00:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.895 04:00:02 -- setup/common.sh@33 -- # echo 0 00:02:47.895 04:00:02 -- setup/common.sh@33 -- # return 0 00:02:47.895 04:00:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.895 04:00:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.895 04:00:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.895 04:00:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:47.895 node0=512 expecting 513 00:02:47.895 04:00:02 -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:02:47.895 04:00:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.895 04:00:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.895 04:00:02 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:47.895 node1=513 expecting 512 00:02:47.895 04:00:02 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:47.895 00:02:47.895 real 0m3.122s 00:02:47.895 user 0m1.075s 00:02:47.895 sys 0m1.915s 00:02:47.895 04:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:47.895 04:00:02 -- common/autotest_common.sh@10 -- # set +x 00:02:47.895 ************************************ 00:02:47.895 END TEST odd_alloc 00:02:47.895 ************************************ 00:02:47.895 04:00:02 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:47.895 04:00:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:47.895 04:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:47.895 04:00:02 -- common/autotest_common.sh@10 -- # set +x 00:02:47.895 ************************************ 00:02:47.895 START TEST custom_alloc 00:02:47.895 ************************************ 00:02:47.895 04:00:02 -- common/autotest_common.sh@1104 -- # custom_alloc 00:02:47.895 04:00:02 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:47.895 04:00:02 -- setup/hugepages.sh@169 -- # local node 00:02:47.895 04:00:02 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:47.895 04:00:02 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:47.895 04:00:02 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:47.895 04:00:02 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:47.895 04:00:02 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:47.895 04:00:02 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:47.895 04:00:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:47.895 04:00:02 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:47.895 04:00:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.895 04:00:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:47.895 04:00:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.895 04:00:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.895 04:00:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.895 04:00:02 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:47.895 04:00:02 -- setup/hugepages.sh@83 -- # : 256 00:02:47.895 04:00:02 -- setup/hugepages.sh@84 -- # : 1 00:02:47.895 04:00:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:47.895 04:00:02 -- setup/hugepages.sh@83 -- # : 0 00:02:47.895 04:00:02 -- setup/hugepages.sh@84 -- # : 0 00:02:47.895 04:00:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:47.895 04:00:02 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:47.895 04:00:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:47.895 04:00:02 -- 
setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:47.895 04:00:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:47.895 04:00:02 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:47.895 04:00:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.895 04:00:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:47.895 04:00:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.895 04:00:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.895 04:00:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.895 04:00:02 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:47.895 04:00:02 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:47.895 04:00:02 -- setup/hugepages.sh@78 -- # return 0 00:02:47.895 04:00:02 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:47.895 04:00:02 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:47.895 04:00:02 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:47.895 04:00:02 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:47.895 04:00:02 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:47.895 04:00:02 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:47.895 04:00:02 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:47.895 04:00:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.895 04:00:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:47.895 04:00:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.895 04:00:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.895 04:00:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.895 04:00:02 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:47.895 04:00:02 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:47.895 04:00:02 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:47.895 04:00:02 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:47.895 04:00:02 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:47.895 04:00:02 -- setup/hugepages.sh@78 -- # return 0 00:02:47.895 04:00:02 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:47.895 04:00:02 -- setup/hugepages.sh@187 -- # setup output 00:02:47.895 04:00:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.895 04:00:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:50.437 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.437 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:50.437 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.437 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.437 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.437 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.437 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 
00:02:50.437 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.437 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.437 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.437 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.437 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.701 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.701 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.701 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:50.701 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.701 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:50.701 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:50.701 04:00:05 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:50.701 04:00:05 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:50.701 04:00:05 -- setup/hugepages.sh@89 -- # local node 00:02:50.701 04:00:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:50.701 04:00:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:50.701 04:00:05 -- setup/hugepages.sh@92 -- # local surp 00:02:50.701 04:00:05 -- setup/hugepages.sh@93 -- # local resv 00:02:50.701 04:00:05 -- setup/hugepages.sh@94 -- # local anon 00:02:50.701 04:00:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:50.701 04:00:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:50.701 04:00:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:50.701 04:00:05 -- setup/common.sh@18 -- # local node= 00:02:50.701 04:00:05 -- setup/common.sh@19 -- # local var val 00:02:50.701 04:00:05 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.701 04:00:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.701 04:00:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.701 04:00:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.701 04:00:05 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.701 04:00:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 240544020 kB' 'MemAvailable: 244180864 kB' 'Buffers: 2696 kB' 'Cached: 10893436 kB' 'SwapCached: 0 kB' 'Active: 7039196 kB' 'Inactive: 4390304 kB' 'Active(anon): 6468380 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542420 kB' 'Mapped: 211616 kB' 'Shmem: 5935012 kB' 'KReclaimable: 309552 kB' 'Slab: 955260 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645708 kB' 'KernelStack: 24816 kB' 'PageTables: 9600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094976 kB' 'Committed_AS: 8068760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329912 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3262528 
kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # 
continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.701 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.701 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
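The loop being traced above and below is scanning /proc/meminfo field by field until it reaches the requested key (AnonHugePages at this point), then echoing that field's value. A minimal sketch of that kind of lookup, reconstructed from the xtrace rather than copied from setup/common.sh (the helper name is chosen here for illustration):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N " prefixes

    # meminfo_get KEY [NODE] -> prints the value column for KEY and returns 0.
    # With no node given it falls back to /proc/meminfo, which is what happens
    # in this trace (node= is empty, so the /sys/devices/system/node test fails).
    meminfo_get() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <n> "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    meminfo_get HugePages_Total   # e.g. 1536 on this machine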
00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 
00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.702 04:00:05 -- setup/common.sh@33 -- # echo 0 00:02:50.702 04:00:05 -- setup/common.sh@33 -- # return 0 00:02:50.702 04:00:05 -- setup/hugepages.sh@97 -- # anon=0 00:02:50.702 04:00:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:50.702 04:00:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.702 04:00:05 -- setup/common.sh@18 -- # local node= 00:02:50.702 04:00:05 -- setup/common.sh@19 -- # local var val 00:02:50.702 04:00:05 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.702 04:00:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.702 04:00:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.702 04:00:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.702 04:00:05 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.702 04:00:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 240546332 kB' 'MemAvailable: 244183176 kB' 'Buffers: 2696 kB' 'Cached: 10893436 kB' 'SwapCached: 0 kB' 'Active: 7040468 kB' 'Inactive: 4390304 kB' 'Active(anon): 6469652 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543336 kB' 'Mapped: 211692 kB' 'Shmem: 5935012 kB' 'KReclaimable: 309552 kB' 'Slab: 955412 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645860 kB' 'KernelStack: 24848 kB' 'PageTables: 9732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094976 kB' 'Committed_AS: 8068772 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329896 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.702 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.702 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 
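A few steps before setup.sh was invoked above, the test joined the per-node targets into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and summed them to 1536 pages. A minimal sketch of that comma-joining, using the values visible in this trace (variable names mirror the xtrace, but the snippet is illustrative, not the hugepages.sh source):

    #!/usr/bin/env bash
    # Per-node targets taken from the trace: node 0 -> 512 pages, node 1 -> 1024 pages.
    nodes_hp=([0]=512 [1]=1024)
    HUGENODE=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    # "${arr[*]}" joins array elements with the first character of IFS, hence the comma.
    ( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )    # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
    echo "total pages requested: $_nr_hugepages" # 1536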
00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 
04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.703 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.703 04:00:05 -- setup/common.sh@33 -- # echo 0 00:02:50.703 04:00:05 -- setup/common.sh@33 -- # return 0 00:02:50.703 04:00:05 -- setup/hugepages.sh@99 -- # surp=0 00:02:50.703 04:00:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:50.703 04:00:05 -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:50.703 04:00:05 -- setup/common.sh@18 -- # local node= 00:02:50.703 04:00:05 -- setup/common.sh@19 -- # local var val 00:02:50.703 04:00:05 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.703 04:00:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.703 04:00:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.703 04:00:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.703 04:00:05 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.703 04:00:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.703 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 240546156 kB' 'MemAvailable: 244183000 kB' 'Buffers: 2696 kB' 'Cached: 10893436 kB' 'SwapCached: 0 kB' 'Active: 7039736 kB' 'Inactive: 4390304 kB' 'Active(anon): 6468920 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542576 kB' 'Mapped: 211688 kB' 'Shmem: 5935012 kB' 'KReclaimable: 309552 kB' 'Slab: 955412 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645860 kB' 'KernelStack: 24864 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094976 kB' 'Committed_AS: 8068784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329896 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # 
continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.704 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.704 04:00:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.705 04:00:05 -- setup/common.sh@33 -- # echo 0 00:02:50.705 04:00:05 -- setup/common.sh@33 -- # return 0 00:02:50.705 04:00:05 -- setup/hugepages.sh@100 -- # resv=0 00:02:50.705 04:00:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:50.705 nr_hugepages=1536 00:02:50.705 04:00:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:50.705 resv_hugepages=0 00:02:50.705 04:00:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:50.705 surplus_hugepages=0 00:02:50.705 04:00:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:50.705 anon_hugepages=0 00:02:50.705 04:00:05 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:50.705 04:00:05 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:50.705 04:00:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:50.705 04:00:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:50.705 04:00:05 -- setup/common.sh@18 -- # local node= 00:02:50.705 04:00:05 -- setup/common.sh@19 -- # local var val 00:02:50.705 04:00:05 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.705 04:00:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.705 04:00:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.705 04:00:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.705 04:00:05 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.705 04:00:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
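The checks just above compare the 1536 pages this test asked for against the counters read back from /proc/meminfo, with reserved, surplus and anonymous hugepages all 0 here. A standalone sketch of that sanity check, hedged as an illustration of the idea rather than the verify_nr_hugepages implementation:

    #!/usr/bin/env bash
    expected=1536   # 512 (node 0) + 1024 (node 1), per the HUGENODE string earlier in the run

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

    # The pool is healthy when the allocated total matches the request and the
    # kernel reports no surplus or reserved pages eating into it.
    if (( expected == total + surp + rsvd )) && (( expected == total )); then
        echo "nr_hugepages=$total resv_hugepages=$rsvd surplus_hugepages=$surp: OK"
    else
        echo "hugepage accounting mismatch: total=$total surp=$surp rsvd=$rsvd" >&2
        exit 1
    fi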
00:02:50.705 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.705 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.705 04:00:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 240546972 kB' 'MemAvailable: 244183816 kB' 'Buffers: 2696 kB' 'Cached: 10893440 kB' 'SwapCached: 0 kB' 'Active: 7039932 kB' 'Inactive: 4390304 kB' 'Active(anon): 6469116 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543372 kB' 'Mapped: 211612 kB' 'Shmem: 5935016 kB' 'KReclaimable: 309552 kB' 'Slab: 955468 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645916 kB' 'KernelStack: 24864 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094976 kB' 'Committed_AS: 8068800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329896 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- 
setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 
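For reference, the 1536 figure being verified in this loop came from the two get_test_nr_hugepages calls earlier in the test: with a 2048 kB hugepage (the Hugepagesize shown in the meminfo dumps above), 1048576 / 2048 = 512 pages and 2097152 / 2048 = 1024 pages. A short sketch of that conversion, assuming the requested sizes are expressed in kB as the trace suggests:

    #!/usr/bin/env bash
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 (kB)
    for size in 1048576 2097152; do        # the two sizes requested in this test (kB)
        echo "$size kB -> $(( size / default_hugepages )) hugepages"
    done
    # 1048576 kB -> 512 hugepages
    # 2097152 kB -> 1024 hugepages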
00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 
00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 
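The entries around this point are setup/common.sh's get_meminfo walking the captured /proc/meminfo output field by field with IFS=': ' and read -r var val _, skipping every field that is not the one requested (HugePages_Total here, which resolves to 1536 a few entries below). A minimal sketch of that scan, reconstructed from the trace rather than copied from the script, reading the file directly instead of going through the script's mapfile array; the helper name is illustrative, not the script's own:

    # Print the value of one /proc/meminfo field, e.g. HugePages_Total -> 1536 on this host.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # same skip-until-match loop as the xtrace above
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }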
00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.967 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.967 04:00:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.968 04:00:05 -- setup/common.sh@33 -- # echo 1536 00:02:50.968 04:00:05 -- setup/common.sh@33 -- # return 0 00:02:50.968 04:00:05 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:50.968 04:00:05 -- setup/hugepages.sh@112 -- # get_nodes 00:02:50.968 04:00:05 -- setup/hugepages.sh@27 -- # local node 00:02:50.968 04:00:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.968 04:00:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:50.968 04:00:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.968 04:00:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:50.968 04:00:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:50.968 04:00:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:50.968 04:00:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.968 04:00:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.968 04:00:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:50.968 04:00:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.968 04:00:05 -- setup/common.sh@18 -- # local node=0 00:02:50.968 04:00:05 -- setup/common.sh@19 -- # local var val 00:02:50.968 04:00:05 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.968 04:00:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.968 04:00:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:50.968 04:00:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:50.968 04:00:05 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.968 04:00:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 119491064 kB' 'MemUsed: 12325164 kB' 'SwapCached: 0 kB' 'Active: 5632240 kB' 'Inactive: 3992136 kB' 'Active(anon): 5219364 kB' 'Inactive(anon): 0 kB' 'Active(file): 412876 kB' 'Inactive(file): 3992136 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9295972 kB' 'Mapped: 162744 kB' 'AnonPages: 338048 kB' 'Shmem: 4890960 kB' 'KernelStack: 14024 kB' 'PageTables: 7248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 211376 kB' 'Slab: 549332 kB' 'SReclaimable: 211376 kB' 'SUnreclaim: 337956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 
0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 
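Just above, get_meminfo was re-entered as get_meminfo HugePages_Surp 0: with a node argument, mem_f is switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the leading "Node <N> " prefix is stripped from every line (the mem=("${mem[@]#Node +([0-9]) }") step) before the same field scan runs. A hedged per-node sketch in the same spirit, again with an illustrative helper name and a plain literal prefix strip instead of the script's extglob pattern:

    # Print one field from a node's meminfo, e.g. HugePages_Surp on node 0.
    get_node_meminfo_field() {
        local get=$1 node=$2 line var val _
        while read -r line; do
            line=${line#"Node ${node} "}            # drop the "Node 0 " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

The node-0 scan below ends with echo 0, i.e. no surplus hugepages on that node.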
00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@33 -- # echo 0 00:02:50.968 04:00:05 -- setup/common.sh@33 -- # return 0 00:02:50.968 04:00:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.968 04:00:05 -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:02:50.968 04:00:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.968 04:00:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:50.968 04:00:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.968 04:00:05 -- setup/common.sh@18 -- # local node=1 00:02:50.968 04:00:05 -- setup/common.sh@19 -- # local var val 00:02:50.968 04:00:05 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.968 04:00:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.968 04:00:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:50.968 04:00:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:50.968 04:00:05 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.968 04:00:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742248 kB' 'MemFree: 121060884 kB' 'MemUsed: 5681364 kB' 'SwapCached: 0 kB' 'Active: 1407456 kB' 'Inactive: 398168 kB' 'Active(anon): 1249516 kB' 'Inactive(anon): 0 kB' 'Active(file): 157940 kB' 'Inactive(file): 398168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1600204 kB' 'Mapped: 48868 kB' 'AnonPages: 205564 kB' 'Shmem: 1044096 kB' 'KernelStack: 10808 kB' 'PageTables: 2488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98176 kB' 'Slab: 406040 kB' 'SReclaimable: 98176 kB' 'SUnreclaim: 307864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 
00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
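The node-1 scan in progress here is driven by the loop visible at hugepages.sh@115-117 in the trace: for each node it adds the reserved count and then that node's HugePages_Surp to the per-node totals being verified. An illustrative standalone version, reusing the hypothetical get_node_meminfo_field helper sketched earlier and seeding it with the values the test holds at this point (512 and 1024, resv 0); this is a reconstruction from the trace, not the script's exact code:

    nodes_test=(512 1024)   # per-node totals held at this point (node0=512, node1=1024)
    resv=0                  # HugePages_Rsvd: 0 above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_node_meminfo_field HugePages_Surp "$node") ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # unchanged here, since resv and surp are 0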
00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # continue 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.968 04:00:05 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.968 04:00:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.968 04:00:05 -- setup/common.sh@33 -- # echo 0 00:02:50.968 04:00:05 -- setup/common.sh@33 -- # return 0 00:02:50.968 04:00:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.968 04:00:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.968 04:00:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.968 04:00:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.968 04:00:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:50.968 node0=512 expecting 512 00:02:50.968 04:00:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.968 04:00:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.968 04:00:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.968 04:00:05 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:50.968 node1=1024 expecting 1024 00:02:50.968 04:00:05 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:50.968 00:02:50.968 real 0m3.228s 00:02:50.968 user 0m1.133s 00:02:50.968 sys 0m1.972s 00:02:50.968 04:00:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.968 04:00:05 -- common/autotest_common.sh@10 -- # set +x 00:02:50.968 ************************************ 00:02:50.968 END TEST custom_alloc 00:02:50.968 ************************************ 00:02:50.968 04:00:05 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:50.968 04:00:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:50.968 04:00:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:50.968 04:00:05 -- common/autotest_common.sh@10 -- # set +x 00:02:50.968 ************************************ 00:02:50.968 START TEST no_shrink_alloc 00:02:50.969 ************************************ 00:02:50.969 04:00:05 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:02:50.969 04:00:05 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:50.969 04:00:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:50.969 04:00:05 -- setup/hugepages.sh@50 -- # 
(( 2 > 1 )) 00:02:50.969 04:00:05 -- setup/hugepages.sh@51 -- # shift 00:02:50.969 04:00:05 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:50.969 04:00:05 -- setup/hugepages.sh@52 -- # local node_ids 00:02:50.969 04:00:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.969 04:00:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:50.969 04:00:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:50.969 04:00:05 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:50.969 04:00:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.969 04:00:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:50.969 04:00:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.969 04:00:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.969 04:00:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.969 04:00:05 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:50.969 04:00:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:50.969 04:00:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:50.969 04:00:05 -- setup/hugepages.sh@73 -- # return 0 00:02:50.969 04:00:05 -- setup/hugepages.sh@198 -- # setup output 00:02:50.969 04:00:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.969 04:00:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:54.271 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.271 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:54.271 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.271 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.271 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.271 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.271 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.271 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.271 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.271 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.271 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.271 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.271 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.271 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.271 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:54.271 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.271 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.271 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.271 04:00:08 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:54.271 04:00:08 -- setup/hugepages.sh@89 -- # local node 00:02:54.271 04:00:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:54.271 04:00:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:54.271 04:00:08 -- setup/hugepages.sh@92 -- # local surp 00:02:54.271 04:00:08 -- setup/hugepages.sh@93 -- # local resv 00:02:54.271 04:00:08 -- setup/hugepages.sh@94 -- # local anon 00:02:54.271 04:00:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:54.271 04:00:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:54.271 04:00:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:54.271 04:00:08 -- setup/common.sh@18 -- # local node= 00:02:54.271 04:00:08 -- setup/common.sh@19 -- # local var val 
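At this point custom_alloc has passed: node0 reported 512 hugepages and node1 reported 1024, matching the requested 512,1024 split and summing to the HugePages_Total of 1536 seen earlier, and no_shrink_alloc's get_test_nr_hugepages has just set nr_hugepages=1024 for its 2097152 kB request pinned to node 0, which is that request divided by the 2048 kB hugepage size. The arithmetic behind both numbers, as a small illustrative check using values taken from the trace:

    hugepagesize_kb=2048          # Hugepagesize: 2048 kB
    requested_kb=2097152          # get_test_nr_hugepages 2097152 0
    node0=512 node1=1024          # custom_alloc's per-node split

    echo $(( requested_kb / hugepagesize_kb ))   # 1024 -> nr_hugepages for no_shrink_alloc
    echo $(( node0 + node1 ))                    # 1536 -> HugePages_Total checked by custom_alloc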
00:02:54.271 04:00:08 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.271 04:00:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.271 04:00:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.271 04:00:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.271 04:00:08 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.271 04:00:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241600980 kB' 'MemAvailable: 245237824 kB' 'Buffers: 2696 kB' 'Cached: 10893564 kB' 'SwapCached: 0 kB' 'Active: 7033132 kB' 'Inactive: 4390304 kB' 'Active(anon): 6462316 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536344 kB' 'Mapped: 210976 kB' 'Shmem: 5935140 kB' 'KReclaimable: 309552 kB' 'Slab: 954388 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 644836 kB' 'KernelStack: 24528 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8054800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329556 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.271 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.271 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 
-- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:54.272 04:00:08 -- setup/common.sh@33 -- # echo 0 00:02:54.272 04:00:08 -- setup/common.sh@33 -- # return 0 00:02:54.272 04:00:08 -- setup/hugepages.sh@97 -- # anon=0 00:02:54.272 04:00:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:54.272 04:00:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.272 04:00:08 -- setup/common.sh@18 -- # local node= 00:02:54.272 04:00:08 -- setup/common.sh@19 -- # local var val 00:02:54.272 04:00:08 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.272 04:00:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.272 04:00:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.272 04:00:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.272 04:00:08 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.272 04:00:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@16 -- # printf 
'%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241601144 kB' 'MemAvailable: 245237988 kB' 'Buffers: 2696 kB' 'Cached: 10893564 kB' 'SwapCached: 0 kB' 'Active: 7033176 kB' 'Inactive: 4390304 kB' 'Active(anon): 6462360 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535984 kB' 'Mapped: 210836 kB' 'Shmem: 5935140 kB' 'KReclaimable: 309552 kB' 'Slab: 954496 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 644944 kB' 'KernelStack: 24512 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8054812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329508 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.272 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.272 04:00:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 
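verify_nr_hugepages is now re-reading /proc/meminfo: AnonHugePages came back 0 kB (hence anon=0 above, expected since transparent hugepages are in madvise mode rather than [never]), and the scan running here is for HugePages_Surp, which continues to the end of this capture. The two reads, sketched with the illustrative get_meminfo_field helper from earlier rather than the script's own function:

    anon=$(get_meminfo_field AnonHugePages)    # 0 on this host
    surp=$(get_meminfo_field HugePages_Surp)   # scan still in progress below
    echo "anon=${anon} surp=${surp}"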
00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 
-- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.273 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.273 04:00:08 -- setup/common.sh@33 -- # echo 0 00:02:54.273 04:00:08 -- setup/common.sh@33 -- # return 0 00:02:54.273 04:00:08 -- setup/hugepages.sh@99 -- # surp=0 00:02:54.273 04:00:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:54.273 04:00:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:54.273 04:00:08 -- setup/common.sh@18 -- # local node= 00:02:54.273 04:00:08 -- setup/common.sh@19 -- # local var val 00:02:54.273 04:00:08 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.273 04:00:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.273 04:00:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.273 04:00:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.273 04:00:08 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.273 04:00:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.273 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241603252 kB' 'MemAvailable: 245240096 kB' 'Buffers: 2696 kB' 'Cached: 10893576 kB' 'SwapCached: 0 kB' 'Active: 7032484 kB' 'Inactive: 4390304 kB' 'Active(anon): 6461668 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535708 kB' 'Mapped: 210756 kB' 'Shmem: 5935152 kB' 'KReclaimable: 309552 kB' 'Slab: 954496 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 644944 kB' 'KernelStack: 24480 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8054824 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329508 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- 
setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.274 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.274 04:00:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 
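Because get_meminfo was called here with an empty node argument, the earlier test [[ -e /sys/devices/system/node/node/meminfo ]] fails (the path carries no node number) and the system-wide /proc/meminfo is read; with node=0 the per-node sysfs file is used instead, as happens further down for the node0 HugePages_Surp lookup. A rough sketch of that file selection, with the function name meminfo_file_for_node chosen purely for illustration:

    meminfo_file_for_node() {
        # Pick the per-node meminfo file when a node number is given and the
        # sysfs path exists; otherwise fall back to the system-wide file.
        local node=$1
        if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
            echo "/sys/devices/system/node/node${node}/meminfo"
        else
            echo /proc/meminfo
        fi
    }
    # Usage: mem_f=$(meminfo_file_for_node 0)   # node0 file on a NUMA system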
00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:54.275 04:00:08 -- setup/common.sh@33 -- # echo 0 00:02:54.275 04:00:08 -- setup/common.sh@33 -- # return 0 00:02:54.275 04:00:08 -- setup/hugepages.sh@100 -- # resv=0 00:02:54.275 04:00:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:54.275 nr_hugepages=1024 00:02:54.275 04:00:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:54.275 resv_hugepages=0 00:02:54.275 04:00:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:54.275 surplus_hugepages=0 00:02:54.275 04:00:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:54.275 anon_hugepages=0 00:02:54.275 04:00:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:54.275 04:00:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:54.275 04:00:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:54.275 04:00:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:54.275 04:00:08 -- setup/common.sh@18 -- # local node= 00:02:54.275 04:00:08 -- setup/common.sh@19 -- # local var val 00:02:54.275 04:00:08 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.275 04:00:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.275 04:00:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:54.275 04:00:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:54.275 04:00:08 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.275 04:00:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241603000 kB' 'MemAvailable: 245239844 kB' 'Buffers: 2696 kB' 'Cached: 10893600 kB' 'SwapCached: 0 kB' 'Active: 7032760 kB' 'Inactive: 4390304 kB' 'Active(anon): 6461944 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535944 kB' 'Mapped: 210756 kB' 'Shmem: 5935176 kB' 'KReclaimable: 309552 kB' 'Slab: 954496 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 644944 kB' 'KernelStack: 24464 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8054840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329508 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.275 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.275 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 
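The check being exercised by this HugePages_Total scan is plain accounting: the 1024 pages reported by HugePages_Total must equal the requested nr_hugepages plus the surplus and reserved counts found above (both 0 in this run). A small worked check using the values printed in this log:

    # Values as printed above for this run.
    nr_hugepages=1024
    surp=0      # HugePages_Surp
    resv=0      # HugePages_Rsvd
    total=1024  # HugePages_Total

    # Mirrors the (( 1024 == nr_hugepages + surp + resv )) test in hugepages.sh.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    fi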
00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 
-- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.276 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.276 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:54.277 04:00:08 -- setup/common.sh@33 -- # echo 1024 00:02:54.277 04:00:08 -- setup/common.sh@33 -- # return 0 00:02:54.277 04:00:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:54.277 04:00:08 -- setup/hugepages.sh@112 -- # get_nodes 00:02:54.277 04:00:08 -- setup/hugepages.sh@27 -- # local node 00:02:54.277 04:00:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.277 04:00:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:54.277 04:00:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:54.277 04:00:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:54.277 04:00:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:54.277 04:00:08 -- 
setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:54.277 04:00:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.277 04:00:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.277 04:00:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:54.277 04:00:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.277 04:00:08 -- setup/common.sh@18 -- # local node=0 00:02:54.277 04:00:08 -- setup/common.sh@19 -- # local var val 00:02:54.277 04:00:08 -- setup/common.sh@20 -- # local mem_f mem 00:02:54.277 04:00:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.277 04:00:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:54.277 04:00:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:54.277 04:00:08 -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.277 04:00:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 118479252 kB' 'MemUsed: 13336976 kB' 'SwapCached: 0 kB' 'Active: 5625016 kB' 'Inactive: 3992136 kB' 'Active(anon): 5212140 kB' 'Inactive(anon): 0 kB' 'Active(file): 412876 kB' 'Inactive(file): 3992136 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9296044 kB' 'Mapped: 162592 kB' 'AnonPages: 330232 kB' 'Shmem: 4891032 kB' 'KernelStack: 13688 kB' 'PageTables: 6092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 211376 kB' 'Slab: 548872 kB' 'SReclaimable: 211376 kB' 'SUnreclaim: 337496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.277 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.277 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.278 04:00:08 
-- setup/common.sh@31 -- # read -r var val _ 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # continue 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # IFS=': ' 00:02:54.278 04:00:08 -- setup/common.sh@31 -- # read -r var val _ 00:02:54.278 04:00:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.278 04:00:08 -- setup/common.sh@33 -- # echo 0 00:02:54.278 04:00:08 -- setup/common.sh@33 -- # return 0 00:02:54.278 04:00:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.278 04:00:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.278 04:00:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.278 04:00:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.278 04:00:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:54.278 node0=1024 expecting 1024 00:02:54.278 04:00:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:54.278 04:00:08 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:54.278 04:00:08 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:54.278 04:00:08 -- setup/hugepages.sh@202 -- # setup output 00:02:54.278 04:00:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.278 04:00:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:56.846 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:56.846 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.846 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:56.846 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:56.846 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:56.846 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:56.846 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:56.846 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:56.846 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:56.846 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:56.846 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:56.846 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:56.846 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:56.846 
0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:56.846 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.846 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:56.846 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:56.846 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:56.846 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:56.846 04:00:11 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:56.846 04:00:11 -- setup/hugepages.sh@89 -- # local node 00:02:56.846 04:00:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.846 04:00:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.846 04:00:11 -- setup/hugepages.sh@92 -- # local surp 00:02:56.846 04:00:11 -- setup/hugepages.sh@93 -- # local resv 00:02:56.846 04:00:11 -- setup/hugepages.sh@94 -- # local anon 00:02:56.846 04:00:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.846 04:00:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.846 04:00:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.846 04:00:11 -- setup/common.sh@18 -- # local node= 00:02:56.846 04:00:11 -- setup/common.sh@19 -- # local var val 00:02:56.846 04:00:11 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.846 04:00:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.846 04:00:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.846 04:00:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.846 04:00:11 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.846 04:00:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.846 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.847 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.847 04:00:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241603740 kB' 'MemAvailable: 245240584 kB' 'Buffers: 2696 kB' 'Cached: 10893680 kB' 'SwapCached: 0 kB' 'Active: 7034444 kB' 'Inactive: 4390304 kB' 'Active(anon): 6463628 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537556 kB' 'Mapped: 210888 kB' 'Shmem: 5935256 kB' 'KReclaimable: 309552 kB' 'Slab: 955664 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 646112 kB' 'KernelStack: 24912 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8060208 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329796 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB' 00:02:56.847 04:00:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.847 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.847 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.847 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.847 04:00:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.847 
[setup/common.sh@32 compares each /proc/meminfo field against AnonHugePages and skips it with continue until the AnonHugePages line is reached]
00:02:56.848 04:00:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:56.848 04:00:11 -- setup/common.sh@33 -- # echo 0
00:02:56.848 04:00:11 -- setup/common.sh@33 -- # return 0
00:02:56.848 04:00:11 -- setup/hugepages.sh@97 -- # anon=0
00:02:56.848 04:00:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-@31: get_meminfo prologue as above, with get=HugePages_Surp, node unset, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ', read]
00:02:56.848 04:00:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241607136 kB' 'MemAvailable: 245243980 kB' 'Buffers: 2696 kB' 'Cached: 10893684 kB' 'SwapCached: 0 kB' 'Active: 7035780 kB' 'Inactive: 4390304 kB' 'Active(anon): 6464964 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538960 kB' 'Mapped: 210828 kB' 'Shmem: 5935260 kB' 'KReclaimable: 309552 kB' 'Slab: 955656 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 646104 kB' 'KernelStack: 24928 kB' 'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8060592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329796 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB'
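The snapshot above is taken for the HugePages_Surp lookup. Surplus hugepages only exist when the kernel is allowed to allocate pages beyond the static pool (the overcommit knob) and that pool runs dry, so on a run like this the counter is expected to stay at 0. The paths below are the standard procfs locations for those knobs; the commented values reflect what this log reports:

  cat /proc/sys/vm/nr_hugepages             # size of the static pool (1024 here)
  cat /proc/sys/vm/nr_overcommit_hugepages  # extra pages the kernel may hand out on demand
  grep -E 'HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo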
[setup/common.sh@32 compares each /proc/meminfo field against HugePages_Surp and skips it with continue until the HugePages_Surp line is reached]
00:02:56.849 04:00:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.849 04:00:11 -- setup/common.sh@33 -- # echo 0
00:02:56.849 04:00:11 -- setup/common.sh@33 -- # return 0
00:02:56.849 04:00:11 -- setup/hugepages.sh@99 -- # surp=0
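The mapfile plus the mem=("${mem[@]#Node +([0-9]) }") strip in each prologue exists because per-node meminfo files repeat a "Node <n>" prefix on every line; removing it lets one parser serve both the system-wide and the per-node case (the node0 lookup later in this log relies on exactly that). A standalone illustration, assuming extglob as in the traced shell:

  # /proc/meminfo:                          "HugePages_Total:    1024"
  # /sys/devices/system/node/node0/meminfo: "Node 0 HugePages_Total:  1024"
  shopt -s extglob
  mapfile -t mem < /sys/devices/system/node/node0/meminfo
  mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix from every element
  printf '%s\n' "${mem[@]}" | awk '$1 == "HugePages_Total:" {print $2}'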
00:02:56.849 04:00:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-@31: get_meminfo prologue as above, with get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ', read]
00:02:56.850 04:00:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241618608 kB' 'MemAvailable: 245255452 kB' 'Buffers: 2696 kB' 'Cached: 10893684 kB' 'SwapCached: 0 kB' 'Active: 7034620 kB' 'Inactive: 4390304 kB' 'Active(anon): 6463804 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537740 kB' 'Mapped: 210772 kB' 'Shmem: 5935260 kB' 'KReclaimable: 309552 kB' 'Slab: 955612 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 646060 kB' 'KernelStack: 24800 kB' 'PageTables: 9400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8060604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329700 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB'
[setup/common.sh@32 compares each /proc/meminfo field against HugePages_Rsvd and skips it with continue until the HugePages_Rsvd line is reached]
00:02:56.851 04:00:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:56.851 04:00:11 -- setup/common.sh@33 -- # echo 0
00:02:56.851 04:00:11 -- setup/common.sh@33 -- # return 0
00:02:56.851 04:00:11 -- setup/hugepages.sh@100 -- # resv=0
00:02:56.851 04:00:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:56.851 nr_hugepages=1024
00:02:56.851 04:00:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:56.851 resv_hugepages=0
00:02:56.851 04:00:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:56.851 surplus_hugepages=0
00:02:56.851 04:00:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:56.851 anon_hugepages=0
00:02:56.851 04:00:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:56.851 04:00:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
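The two arithmetic tests above are the core assertion of verify_nr_hugepages: the hugepage total the kernel reports has to be fully explained by the pages the test asked for plus any surplus and reserved pages, and with surp=0 and resv=0 that collapses to total == nr_hugepages. A sketch of the same check, reusing the get_meminfo sketch from earlier (nr_hugepages below stands in for the value the test computed):

  nr_hugepages=1024                          # expected pool size for this run
  surp=$(get_meminfo HugePages_Surp)         # 0 without overcommit
  resv=$(get_meminfo HugePages_Rsvd)         # 0 with nothing mapped yet
  total=$(get_meminfo HugePages_Total)

  (( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting is off"; exit 1; }
  (( surp == 0 && resv == 0 )) && echo "all $total hugepages belong to the preallocated pool"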
00:02:56.851 04:00:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-@31: get_meminfo prologue as above, with get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ', read]
00:02:56.851 04:00:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558476 kB' 'MemFree: 241619688 kB' 'MemAvailable: 245256532 kB' 'Buffers: 2696 kB' 'Cached: 10893684 kB' 'SwapCached: 0 kB' 'Active: 7034880 kB' 'Inactive: 4390304 kB' 'Active(anon): 6464064 kB' 'Inactive(anon): 0 kB' 'Active(file): 570816 kB' 'Inactive(file): 4390304 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538504 kB' 'Mapped: 210772 kB' 'Shmem: 5935260 kB' 'KReclaimable: 309552 kB' 'Slab: 955356 kB' 'SReclaimable: 309552 kB' 'SUnreclaim: 645804 kB' 'KernelStack: 24896 kB' 'PageTables: 9460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619264 kB' 'Committed_AS: 8060620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329844 kB' 'VmallocChunk: 0 kB' 'Percpu: 104448 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3262528 kB' 'DirectMap2M: 16437248 kB' 'DirectMap1G: 250609664 kB'
[setup/common.sh@32 compares each /proc/meminfo field against HugePages_Total and skips it with continue]
00:02:56.853 04:00:11 --
setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.853 04:00:11 -- setup/common.sh@33 -- # echo 1024 00:02:56.853 04:00:11 -- setup/common.sh@33 -- # return 0 00:02:56.853 04:00:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.853 04:00:11 -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.853 04:00:11 -- setup/hugepages.sh@27 -- # local node 00:02:56.853 04:00:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.853 04:00:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:56.853 04:00:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.853 04:00:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:56.853 04:00:11 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.853 04:00:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.853 04:00:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.853 04:00:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.853 04:00:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.853 04:00:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.853 04:00:11 -- setup/common.sh@18 -- # local node=0 00:02:56.853 04:00:11 -- setup/common.sh@19 -- # local var val 00:02:56.853 04:00:11 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.853 04:00:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.853 04:00:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.853 04:00:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.853 04:00:11 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.853 04:00:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 118489440 kB' 'MemUsed: 13326788 kB' 'SwapCached: 0 kB' 'Active: 5626204 kB' 'Inactive: 3992136 kB' 'Active(anon): 5213328 kB' 'Inactive(anon): 0 kB' 'Active(file): 412876 kB' 'Inactive(file): 3992136 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9296052 kB' 'Mapped: 162592 kB' 'AnonPages: 331312 kB' 'Shmem: 4891040 kB' 'KernelStack: 13944 kB' 'PageTables: 6884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 211376 kB' 'Slab: 549144 kB' 'SReclaimable: 211376 kB' 'SUnreclaim: 337768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 
00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.853 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.853 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- 
setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # continue 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.854 04:00:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.854 04:00:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.854 04:00:11 -- setup/common.sh@33 -- # echo 0 00:02:56.854 04:00:11 -- setup/common.sh@33 -- # return 0 00:02:56.854 04:00:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.854 04:00:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.854 04:00:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.854 04:00:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.854 04:00:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:56.854 node0=1024 expecting 1024 00:02:56.854 04:00:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:56.854 00:02:56.854 real 0m5.763s 00:02:56.854 user 0m1.936s 00:02:56.854 sys 0m3.535s 
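The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" checks above is setup/common.sh's get_meminfo walking a meminfo file key by key with IFS=': ' until the requested field matches, then echoing its value (1024 hugepages system-wide here, 0 surplus pages for node 0). Below is a minimal sketch of that lookup, written from the trace rather than copied from the script, so treat the names and details as illustrative:

    # Return the value of one meminfo key, optionally for a single NUMA node.
    # Hedged reconstruction of the behaviour traced above; not the verbatim
    # SPDK setup/common.sh implementation.
    get_meminfo() {
        local get=$1 node=${2:-}
        local file=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}          # per-node files prefix every line with "Node N"
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$file"
        return 1
    }

    get_meminfo HugePages_Total        # system-wide: 1024 in the log above
    get_meminfo HugePages_Surp 0       # node 0 surplus: 0 in the log above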
00:02:56.854 04:00:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.854 04:00:11 -- common/autotest_common.sh@10 -- # set +x 00:02:56.854 ************************************ 00:02:56.854 END TEST no_shrink_alloc 00:02:56.854 ************************************ 00:02:56.854 04:00:11 -- setup/hugepages.sh@217 -- # clear_hp 00:02:56.854 04:00:11 -- setup/hugepages.sh@37 -- # local node hp 00:02:56.854 04:00:11 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:56.854 04:00:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.854 04:00:11 -- setup/hugepages.sh@41 -- # echo 0 00:02:56.854 04:00:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.854 04:00:11 -- setup/hugepages.sh@41 -- # echo 0 00:02:56.854 04:00:11 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:56.854 04:00:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.854 04:00:11 -- setup/hugepages.sh@41 -- # echo 0 00:02:56.854 04:00:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.854 04:00:11 -- setup/hugepages.sh@41 -- # echo 0 00:02:56.854 04:00:11 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:56.854 04:00:11 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:56.854 00:02:56.854 real 0m24.150s 00:02:56.854 user 0m7.469s 00:02:56.854 sys 0m13.528s 00:02:56.854 04:00:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.854 04:00:11 -- common/autotest_common.sh@10 -- # set +x 00:02:56.854 ************************************ 00:02:56.854 END TEST hugepages 00:02:56.854 ************************************ 00:02:56.854 04:00:11 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:02:56.854 04:00:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:56.854 04:00:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:56.854 04:00:11 -- common/autotest_common.sh@10 -- # set +x 00:02:56.854 ************************************ 00:02:56.854 START TEST driver 00:02:56.854 ************************************ 00:02:56.854 04:00:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:02:56.854 * Looking for test storage... 
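Right after the hugepages suite finishes, the clear_hp trace above resets every hugepage pool so the next test group starts clean: for each NUMA node it writes 0 into every hugepages-*/nr_hugepages sysfs file, then exports CLEAR_HUGE=yes. A small sketch of that step, assuming the standard kernel sysfs layout (writing these files requires root):

    # Reset all per-node hugepage pools to zero, as the clear_hp trace shows.
    # Illustrative sketch; the function name and flow mirror the log, the paths
    # are the stock kernel sysfs layout.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }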
00:02:56.854 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:56.854 04:00:11 -- setup/driver.sh@68 -- # setup reset 00:02:56.854 04:00:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:56.854 04:00:11 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.054 04:00:15 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:01.054 04:00:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:01.054 04:00:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:01.054 04:00:15 -- common/autotest_common.sh@10 -- # set +x 00:03:01.054 ************************************ 00:03:01.054 START TEST guess_driver 00:03:01.054 ************************************ 00:03:01.054 04:00:15 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:01.054 04:00:15 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:01.054 04:00:15 -- setup/driver.sh@47 -- # local fail=0 00:03:01.054 04:00:15 -- setup/driver.sh@49 -- # pick_driver 00:03:01.054 04:00:15 -- setup/driver.sh@36 -- # vfio 00:03:01.054 04:00:15 -- setup/driver.sh@21 -- # local iommu_grups 00:03:01.054 04:00:15 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:01.054 04:00:15 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:01.054 04:00:15 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:01.054 04:00:15 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:01.054 04:00:15 -- setup/driver.sh@29 -- # (( 334 > 0 )) 00:03:01.054 04:00:15 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:01.054 04:00:15 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:01.054 04:00:15 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:01.054 04:00:15 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:01.054 04:00:15 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:01.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:01.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:01.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:01.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:01.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:01.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:01.054 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:01.054 04:00:15 -- setup/driver.sh@30 -- # return 0 00:03:01.054 04:00:15 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:01.054 04:00:15 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:01.054 04:00:15 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:01.054 04:00:15 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:01.054 Looking for driver=vfio-pci 00:03:01.054 04:00:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.054 04:00:15 -- setup/driver.sh@45 -- # setup output config 00:03:01.054 04:00:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.054 04:00:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
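The guess_driver trace above settles on vfio-pci because the host exposes IOMMU groups (334 of them) and `modprobe --show-depends vfio_pci` resolves to real .ko modules. A compact sketch of that decision, reconstructed from the trace; the uio_pci_generic fallback is shown only for illustration and is not taken in this run:

    # Choose a userspace PCI driver the way the traced pick_driver/vfio logic does:
    # vfio-pci when IOMMU groups exist and the vfio_pci module chain resolves.
    pick_driver() {
        local groups
        groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
        if (( groups > 0 )) &&
           modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo uio_pci_generic   # illustrative fallback, not exercised in this log
        fi
    }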
00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:04.370 04:00:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:04.370 04:00:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:04.370 04:00:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.753 04:00:20 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:03:05.753 04:00:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.753 04:00:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:06.324 04:00:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:06.324 04:00:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:06.324 04:00:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:06.324 04:00:20 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:06.324 04:00:20 -- setup/driver.sh@65 -- # setup reset 00:03:06.324 04:00:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.324 04:00:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.523 00:03:10.523 real 0m9.441s 00:03:10.523 user 0m1.958s 00:03:10.523 sys 0m4.163s 00:03:10.523 04:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.523 04:00:24 -- common/autotest_common.sh@10 -- # set +x 00:03:10.523 ************************************ 00:03:10.523 END TEST guess_driver 00:03:10.523 ************************************ 00:03:10.523 00:03:10.523 real 0m13.774s 00:03:10.523 user 0m3.073s 00:03:10.523 sys 0m6.438s 00:03:10.523 04:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.523 04:00:25 -- common/autotest_common.sh@10 -- # set +x 00:03:10.523 ************************************ 00:03:10.523 END TEST driver 00:03:10.523 ************************************ 00:03:10.523 04:00:25 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:03:10.523 04:00:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:10.523 04:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:10.523 04:00:25 -- common/autotest_common.sh@10 -- # set +x 00:03:10.523 ************************************ 00:03:10.523 START TEST devices 00:03:10.523 ************************************ 00:03:10.523 04:00:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:03:10.784 * Looking for test storage... 
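The page-long run of "[[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]]" checks that precedes the driver-test summary above is the verification loop reading the `setup.sh config` output: each device line is split into fields, the arrow marker is checked, and the bound driver is compared against the guessed one, bumping `fail` on any mismatch. A hedged sketch of that loop follows; the exact column layout of the config output is an assumption, not something this log confirms:

    # Count devices not bound to the expected driver, as the traced loop does.
    expected=vfio-pci
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue               # only lines carrying the "-> driver" arrow
        [[ $setup_driver == "$expected" ]] || fail=$((fail + 1))
    done < <(./scripts/setup.sh config)                 # illustrative path to SPDK's setup.sh
    (( fail == 0 )) && echo "all devices bound to $expected"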
00:03:10.784 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:10.784 04:00:25 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:10.784 04:00:25 -- setup/devices.sh@192 -- # setup reset 00:03:10.784 04:00:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.784 04:00:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.086 04:00:28 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:14.086 04:00:28 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:14.086 04:00:28 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:14.086 04:00:28 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:14.086 04:00:28 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:14.086 04:00:28 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:14.086 04:00:28 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:14.086 04:00:28 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.086 04:00:28 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:14.086 04:00:28 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:14.086 04:00:28 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:14.086 04:00:28 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:14.086 04:00:28 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:14.086 04:00:28 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:14.086 04:00:28 -- setup/devices.sh@196 -- # blocks=() 00:03:14.086 04:00:28 -- setup/devices.sh@196 -- # declare -a blocks 00:03:14.086 04:00:28 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:14.086 04:00:28 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:14.086 04:00:28 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:14.086 04:00:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:14.086 04:00:28 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:14.086 04:00:28 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:14.086 04:00:28 -- setup/devices.sh@202 -- # pci=0000:c9:00.0 00:03:14.086 04:00:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:03:14.086 04:00:28 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:14.086 04:00:28 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:14.086 04:00:28 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:14.086 No valid GPT data, bailing 00:03:14.086 04:00:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.086 04:00:28 -- scripts/common.sh@393 -- # pt= 00:03:14.086 04:00:28 -- scripts/common.sh@394 -- # return 1 00:03:14.086 04:00:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:14.086 04:00:28 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:14.086 04:00:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:14.086 04:00:28 -- setup/common.sh@80 -- # echo 2000398934016 00:03:14.086 04:00:28 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:03:14.086 04:00:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:14.086 04:00:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:c9:00.0 00:03:14.086 04:00:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:14.086 04:00:28 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:14.086 04:00:28 -- 
setup/devices.sh@201 -- # ctrl=nvme1 00:03:14.086 04:00:28 -- setup/devices.sh@202 -- # pci=0000:ca:00.0 00:03:14.086 04:00:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\a\:\0\0\.\0* ]] 00:03:14.086 04:00:28 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:14.086 04:00:28 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:14.086 04:00:28 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:03:14.086 No valid GPT data, bailing 00:03:14.086 04:00:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:14.086 04:00:28 -- scripts/common.sh@393 -- # pt= 00:03:14.086 04:00:28 -- scripts/common.sh@394 -- # return 1 00:03:14.086 04:00:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:14.086 04:00:28 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:14.086 04:00:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:14.086 04:00:28 -- setup/common.sh@80 -- # echo 2000398934016 00:03:14.086 04:00:28 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:03:14.086 04:00:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:14.086 04:00:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:ca:00.0 00:03:14.086 04:00:28 -- setup/devices.sh@209 -- # (( 2 > 0 )) 00:03:14.086 04:00:28 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:14.086 04:00:28 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:14.086 04:00:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:14.086 04:00:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:14.086 04:00:28 -- common/autotest_common.sh@10 -- # set +x 00:03:14.086 ************************************ 00:03:14.086 START TEST nvme_mount 00:03:14.086 ************************************ 00:03:14.086 04:00:28 -- common/autotest_common.sh@1104 -- # nvme_mount 00:03:14.086 04:00:28 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:14.086 04:00:28 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:14.086 04:00:28 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.086 04:00:28 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:14.086 04:00:28 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:14.086 04:00:28 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:14.086 04:00:28 -- setup/common.sh@40 -- # local part_no=1 00:03:14.087 04:00:28 -- setup/common.sh@41 -- # local size=1073741824 00:03:14.087 04:00:28 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:14.087 04:00:28 -- setup/common.sh@44 -- # parts=() 00:03:14.087 04:00:28 -- setup/common.sh@44 -- # local parts 00:03:14.087 04:00:28 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:14.087 04:00:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:14.087 04:00:28 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:14.087 04:00:28 -- setup/common.sh@46 -- # (( part++ )) 00:03:14.087 04:00:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:14.087 04:00:28 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:14.087 04:00:28 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:14.087 04:00:28 -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:15.051 Creating new GPT entries in memory. 00:03:15.051 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:03:15.051 other utilities. 00:03:15.051 04:00:29 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:15.051 04:00:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:15.051 04:00:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:15.051 04:00:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:15.051 04:00:29 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:16.434 Creating new GPT entries in memory. 00:03:16.434 The operation has completed successfully. 00:03:16.434 04:00:30 -- setup/common.sh@57 -- # (( part++ )) 00:03:16.434 04:00:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:16.434 04:00:30 -- setup/common.sh@62 -- # wait 3769076 00:03:16.434 04:00:30 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.434 04:00:30 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:16.434 04:00:30 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.434 04:00:30 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:16.434 04:00:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:16.434 04:00:30 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.434 04:00:30 -- setup/devices.sh@105 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:16.434 04:00:30 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:16.434 04:00:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:16.434 04:00:30 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:16.434 04:00:30 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:16.434 04:00:30 -- setup/devices.sh@53 -- # local found=0 00:03:16.434 04:00:30 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:16.434 04:00:30 -- setup/devices.sh@56 -- # : 00:03:16.434 04:00:30 -- setup/devices.sh@59 -- # local pci status 00:03:16.434 04:00:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.434 04:00:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:16.434 04:00:30 -- setup/devices.sh@47 -- # setup output config 00:03:16.434 04:00:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.434 04:00:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:18.978 04:00:33 -- setup/devices.sh@63 -- # found=1 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 
0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.978 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:18.978 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.262 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:19.262 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.262 04:00:33 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:19.262 04:00:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.262 04:00:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:19.262 04:00:33 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:19.262 04:00:33 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.262 04:00:33 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:19.262 04:00:33 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.262 04:00:33 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:19.262 04:00:33 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.262 04:00:33 -- setup/devices.sh@21 -- # umount 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.262 04:00:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:19.262 04:00:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:19.262 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:19.262 04:00:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:19.262 04:00:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:19.579 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:19.579 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:03:19.579 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:19.579 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:19.579 04:00:34 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:19.579 04:00:34 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:19.579 04:00:34 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.579 04:00:34 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:19.579 04:00:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:19.579 04:00:34 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.579 04:00:34 -- setup/devices.sh@116 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.579 04:00:34 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:19.579 04:00:34 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:19.579 04:00:34 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.579 04:00:34 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.579 04:00:34 -- setup/devices.sh@53 -- # local found=0 00:03:19.579 04:00:34 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:19.579 04:00:34 -- setup/devices.sh@56 -- # : 00:03:19.579 04:00:34 -- setup/devices.sh@59 -- # local pci status 00:03:19.579 04:00:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.579 04:00:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:19.579 04:00:34 -- setup/devices.sh@47 -- # setup output config 00:03:19.579 04:00:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.579 04:00:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:22.116 04:00:36 -- setup/devices.sh@63 -- # found=1 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 
0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.116 04:00:36 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:22.116 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.376 04:00:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:22.376 04:00:36 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:22.376 04:00:36 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.376 04:00:36 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:22.376 04:00:36 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:22.376 04:00:36 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.376 04:00:36 -- setup/devices.sh@125 -- # verify 0000:c9:00.0 data@nvme0n1 '' '' 00:03:22.376 04:00:36 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 
00:03:22.376 04:00:36 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:22.376 04:00:36 -- setup/devices.sh@50 -- # local mount_point= 00:03:22.376 04:00:36 -- setup/devices.sh@51 -- # local test_file= 00:03:22.376 04:00:36 -- setup/devices.sh@53 -- # local found=0 00:03:22.376 04:00:36 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:22.376 04:00:36 -- setup/devices.sh@59 -- # local pci status 00:03:22.376 04:00:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.376 04:00:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:22.376 04:00:36 -- setup/devices.sh@47 -- # setup output config 00:03:22.376 04:00:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.376 04:00:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:24.911 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.911 04:00:39 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:24.911 04:00:39 -- setup/devices.sh@63 -- # found=1 00:03:24.911 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.911 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.911 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.171 04:00:39 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:25.171 04:00:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.430 04:00:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:25.430 04:00:39 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:25.430 04:00:39 -- setup/devices.sh@68 -- # return 0 00:03:25.430 04:00:39 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:25.430 04:00:39 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.430 04:00:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:25.431 04:00:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:25.431 04:00:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:25.431 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:25.431 00:03:25.431 real 0m11.311s 00:03:25.431 user 0m2.941s 00:03:25.431 sys 0m5.545s 00:03:25.431 04:00:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.431 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:03:25.431 ************************************ 00:03:25.431 END TEST nvme_mount 00:03:25.431 ************************************ 00:03:25.431 04:00:39 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:25.431 04:00:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.431 04:00:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.431 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:03:25.431 ************************************ 00:03:25.431 START TEST dm_mount 00:03:25.431 ************************************ 00:03:25.431 04:00:39 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:25.431 04:00:39 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:25.431 04:00:39 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:25.431 04:00:39 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:25.431 04:00:39 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:25.431 04:00:39 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:25.431 04:00:39 -- setup/common.sh@40 -- # local part_no=2 00:03:25.431 04:00:39 -- setup/common.sh@41 -- # local size=1073741824 00:03:25.431 04:00:39 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:25.431 04:00:39 -- setup/common.sh@44 -- # parts=() 00:03:25.431 04:00:39 -- setup/common.sh@44 -- # local parts 00:03:25.431 04:00:39 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:25.431 04:00:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:25.431 04:00:39 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:25.431 04:00:39 -- setup/common.sh@46 -- # (( part++ )) 00:03:25.431 04:00:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:25.431 04:00:39 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:25.431 04:00:39 -- setup/common.sh@46 -- # (( part++ )) 00:03:25.431 04:00:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:25.431 04:00:39 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:25.431 04:00:39 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:25.431 04:00:39 -- setup/common.sh@53 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:26.369 Creating new GPT entries in memory. 00:03:26.369 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:26.369 other utilities. 00:03:26.369 04:00:40 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:26.369 04:00:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:26.369 04:00:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:26.369 04:00:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:26.369 04:00:40 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:27.749 Creating new GPT entries in memory. 00:03:27.749 The operation has completed successfully. 00:03:27.749 04:00:41 -- setup/common.sh@57 -- # (( part++ )) 00:03:27.749 04:00:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:27.749 04:00:41 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:27.749 04:00:41 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:27.749 04:00:41 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:28.689 The operation has completed successfully. 00:03:28.689 04:00:42 -- setup/common.sh@57 -- # (( part++ )) 00:03:28.689 04:00:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:28.689 04:00:42 -- setup/common.sh@62 -- # wait 3773977 00:03:28.689 04:00:43 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:28.689 04:00:43 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:28.689 04:00:43 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:28.689 04:00:43 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:28.689 04:00:43 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:28.689 04:00:43 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:28.689 04:00:43 -- setup/devices.sh@161 -- # break 00:03:28.689 04:00:43 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:28.689 04:00:43 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:28.689 04:00:43 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:28.689 04:00:43 -- setup/devices.sh@166 -- # dm=dm-0 00:03:28.689 04:00:43 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:28.689 04:00:43 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:28.689 04:00:43 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:28.689 04:00:43 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount size= 00:03:28.689 04:00:43 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:28.689 04:00:43 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:28.689 04:00:43 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:28.689 04:00:43 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:28.689 04:00:43 -- setup/devices.sh@174 -- # verify 0000:c9:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:28.689 04:00:43 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:28.689 04:00:43 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:28.689 04:00:43 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:28.689 04:00:43 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:28.689 04:00:43 -- setup/devices.sh@53 -- # local found=0 00:03:28.689 04:00:43 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:28.689 04:00:43 -- setup/devices.sh@56 -- # : 00:03:28.689 04:00:43 -- setup/devices.sh@59 -- # local pci status 00:03:28.689 04:00:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.689 04:00:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:28.689 04:00:43 -- setup/devices.sh@47 -- # setup output config 00:03:28.689 04:00:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.689 04:00:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:31.225 04:00:45 -- setup/devices.sh@63 -- # found=1 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.225 04:00:45 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.225 04:00:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.485 04:00:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:31.485 04:00:45 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:31.485 04:00:45 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:31.485 04:00:45 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:31.485 04:00:45 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:31.485 04:00:45 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:31.485 04:00:46 -- setup/devices.sh@184 -- # verify 0000:c9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:31.485 04:00:46 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:31.485 04:00:46 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:31.485 04:00:46 -- setup/devices.sh@50 -- # local mount_point= 00:03:31.485 04:00:46 -- setup/devices.sh@51 -- # local test_file= 00:03:31.485 04:00:46 -- setup/devices.sh@53 -- # local found=0 00:03:31.485 04:00:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:31.485 04:00:46 -- setup/devices.sh@59 -- # local pci status 00:03:31.485 04:00:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.485 04:00:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:31.485 04:00:46 -- setup/devices.sh@47 -- # setup output config 00:03:31.485 04:00:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.485 04:00:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:34.025 04:00:48 -- setup/devices.sh@63 -- # found=1 00:03:34.025 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.025 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.025 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.284 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.284 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.284 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.284 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.284 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.284 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.284 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.284 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.284 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.284 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.284 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.284 04:00:48 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:34.284 04:00:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.284 04:00:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.284 04:00:48 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:34.284 04:00:48 -- setup/devices.sh@68 -- # return 0 00:03:34.284 04:00:48 -- setup/devices.sh@187 -- # cleanup_dm 00:03:34.284 04:00:48 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:34.284 04:00:48 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:34.284 04:00:48 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:34.544 04:00:48 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:34.544 04:00:48 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:34.544 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:34.544 04:00:48 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:34.544 04:00:48 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:34.544 
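For reference, the dm_mount pass traced above reduces to the sequence below. This is a hand-assembled sketch built from the commands visible in the trace, not the test script itself: the partition boundaries and the mkfs/wipefs calls are taken from the log, while the mount point (/mnt/dm_mount), the use of blockdev --getsz, and the linear device-mapper table are assumptions, since the trace only shows 'dmsetup create nvme_dm_test' without its table.

    # zap and repartition the drive (boundaries as in the traced sgdisk calls)
    sgdisk /dev/nvme0n1 --zap-all
    sgdisk /dev/nvme0n1 --new=1:2048:2099199
    sgdisk /dev/nvme0n1 --new=2:2099200:4196351
    # concatenate the two partitions behind one device-mapper node (linear table assumed)
    p1=$(blockdev --getsz /dev/nvme0n1p1)
    p2=$(blockdev --getsz /dev/nvme0n1p2)
    printf '%s\n' \
      "0 $p1 linear /dev/nvme0n1p1 0" \
      "$p1 $p2 linear /dev/nvme0n1p2 0" | dmsetup create nvme_dm_test
    # format, mount, and drop the marker file the verify step looks for
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mkdir -p /mnt/dm_mount && mount /dev/mapper/nvme_dm_test /mnt/dm_mount
    touch /mnt/dm_mount/test_dm
    # teardown mirrors cleanup_dm/cleanup_nvme in the trace
    umount /mnt/dm_mount
    dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2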
00:03:34.544 real 0m8.953s 00:03:34.544 user 0m1.872s 00:03:34.544 sys 0m3.695s 00:03:34.544 04:00:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.544 04:00:48 -- common/autotest_common.sh@10 -- # set +x 00:03:34.544 ************************************ 00:03:34.544 END TEST dm_mount 00:03:34.544 ************************************ 00:03:34.544 04:00:48 -- setup/devices.sh@1 -- # cleanup 00:03:34.544 04:00:48 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:34.544 04:00:48 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.544 04:00:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:34.544 04:00:48 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:34.544 04:00:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:34.544 04:00:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:34.805 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:34.805 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:03:34.805 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:34.805 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:34.805 04:00:49 -- setup/devices.sh@12 -- # cleanup_dm 00:03:34.805 04:00:49 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:34.805 04:00:49 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:34.805 04:00:49 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:34.805 04:00:49 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:34.805 04:00:49 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:34.805 04:00:49 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:34.805 00:03:34.805 real 0m24.155s 00:03:34.805 user 0m6.059s 00:03:34.805 sys 0m11.561s 00:03:34.805 04:00:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.805 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:03:34.805 ************************************ 00:03:34.805 END TEST devices 00:03:34.805 ************************************ 00:03:34.805 00:03:34.805 real 1m25.201s 00:03:34.805 user 0m22.586s 00:03:34.805 sys 0m42.930s 00:03:34.805 04:00:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.805 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:03:34.805 ************************************ 00:03:34.805 END TEST setup.sh 00:03:34.805 ************************************ 00:03:34.805 04:00:49 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:03:37.349 Hugepages 00:03:37.349 node hugesize free / total 00:03:37.349 node0 1048576kB 0 / 0 00:03:37.349 node0 2048kB 2048 / 2048 00:03:37.349 node1 1048576kB 0 / 0 00:03:37.349 node1 2048kB 0 / 0 00:03:37.349 00:03:37.349 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:37.349 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:03:37.349 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:03:37.349 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:03:37.349 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:03:37.349 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:03:37.349 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:03:37.349 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:03:37.349 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:03:37.349 NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:37.349 NVMe 0000:ca:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:03:37.349 DSA 0000:e7:01.0 8086 0b25 1 idxd 
- - 00:03:37.349 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:03:37.349 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:03:37.349 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:03:37.349 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:03:37.349 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:03:37.349 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:03:37.349 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:03:37.349 04:00:51 -- spdk/autotest.sh@141 -- # uname -s 00:03:37.349 04:00:51 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:37.349 04:00:51 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:37.349 04:00:51 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:39.958 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:39.958 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:39.958 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:39.958 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:40.220 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:40.220 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:40.220 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:40.220 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:40.220 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:40.220 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:40.220 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:40.220 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:40.482 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:40.482 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:40.482 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:40.482 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:41.866 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.439 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.439 04:00:56 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:43.381 04:00:57 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:43.381 04:00:57 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:43.381 04:00:57 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:03:43.642 04:00:57 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:03:43.642 04:00:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:43.642 04:00:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:43.642 04:00:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.642 04:00:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:43.642 04:00:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:43.642 04:00:58 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:43.642 04:00:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:03:43.642 04:00:58 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.184 Waiting for block devices as requested 00:03:46.184 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:03:46.184 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:46.184 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:46.184 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:46.445 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:03:46.445 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:46.445 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:03:46.445 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:46.704 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:03:46.704 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:46.704 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:03:46.964 
0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:03:46.964 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:03:46.964 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:03:46.964 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:47.224 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:03:47.224 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:47.224 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:03:47.484 04:01:01 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:47.484 04:01:01 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:c9:00.0 00:03:47.484 04:01:01 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:47.484 04:01:01 -- common/autotest_common.sh@1487 -- # grep 0000:c9:00.0/nvme/nvme 00:03:47.484 04:01:01 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:03:47.484 04:01:01 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 ]] 00:03:47.484 04:01:01 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:03:47.484 04:01:01 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:47.484 04:01:01 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:03:47.484 04:01:01 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:03:47.484 04:01:01 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:03:47.484 04:01:01 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:47.484 04:01:01 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:47.484 04:01:01 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:03:47.484 04:01:01 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:47.484 04:01:01 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:47.484 04:01:01 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:47.485 04:01:01 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:47.485 04:01:01 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:47.485 04:01:01 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:47.485 04:01:01 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:47.485 04:01:01 -- common/autotest_common.sh@1542 -- # continue 00:03:47.485 04:01:01 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:47.485 04:01:01 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:ca:00.0 00:03:47.485 04:01:01 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:47.485 04:01:01 -- common/autotest_common.sh@1487 -- # grep 0000:ca:00.0/nvme/nvme 00:03:47.485 04:01:01 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme1 00:03:47.485 04:01:01 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme1 ]] 00:03:47.485 04:01:01 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme1 00:03:47.485 04:01:01 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:47.485 04:01:01 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:03:47.485 04:01:01 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:03:47.485 04:01:01 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:03:47.485 04:01:01 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:47.485 04:01:01 -- 
common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:47.485 04:01:01 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:03:47.485 04:01:01 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:47.485 04:01:01 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:47.485 04:01:01 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:47.485 04:01:01 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:03:47.485 04:01:01 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:47.485 04:01:01 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:47.485 04:01:01 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:47.485 04:01:01 -- common/autotest_common.sh@1542 -- # continue 00:03:47.485 04:01:01 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:47.485 04:01:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:47.485 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:03:47.485 04:01:01 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:47.485 04:01:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:47.485 04:01:01 -- common/autotest_common.sh@10 -- # set +x 00:03:47.485 04:01:01 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:50.780 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:50.780 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:50.780 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:50.780 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:50.780 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:50.780 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:50.780 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:50.780 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:50.780 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:50.780 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:50.780 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:50.780 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:50.780 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:50.780 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:50.780 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:50.780 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:52.166 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:03:52.737 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:03:52.737 04:01:07 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:52.737 04:01:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:52.737 04:01:07 -- common/autotest_common.sh@10 -- # set +x 00:03:52.737 04:01:07 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:52.737 04:01:07 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:52.737 04:01:07 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:52.737 04:01:07 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:52.737 04:01:07 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:52.737 04:01:07 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:52.737 04:01:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:52.737 04:01:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:52.737 04:01:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:52.737 04:01:07 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:52.737 04:01:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:52.996 04:01:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:52.996 04:01:07 -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:03:52.996 04:01:07 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:52.996 04:01:07 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:c9:00.0/device 00:03:52.996 04:01:07 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:03:52.996 04:01:07 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:52.996 04:01:07 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:03:52.996 04:01:07 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:52.996 04:01:07 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:ca:00.0/device 00:03:52.996 04:01:07 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:03:52.996 04:01:07 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:52.996 04:01:07 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:03:52.996 04:01:07 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:03:52.996 04:01:07 -- common/autotest_common.sh@1577 -- # [[ -z 0000:c9:00.0 ]] 00:03:52.996 04:01:07 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3784548 00:03:52.996 04:01:07 -- common/autotest_common.sh@1583 -- # waitforlisten 3784548 00:03:52.996 04:01:07 -- common/autotest_common.sh@819 -- # '[' -z 3784548 ']' 00:03:52.997 04:01:07 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.997 04:01:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.997 04:01:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:52.997 04:01:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.997 04:01:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:52.997 04:01:07 -- common/autotest_common.sh@10 -- # set +x 00:03:52.997 [2024-05-14 04:01:07.428566] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
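The get_nvme_bdfs and get_nvme_bdfs_by_id helpers traced above boil down to the pipeline below; a sketch that assumes it is run from the root of an SPDK checkout (so scripts/gen_nvme.sh and jq are available) and, like opal_revert_cleanup here, keeps only controllers whose PCI device ID is 0x0a54.

    # enumerate the NVMe controllers SPDK can see and collect their PCI addresses
    bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    # keep only a given device ID, mirroring the /sys/bus/pci/devices/<bdf>/device check
    for bdf in "${bdfs[@]}"; do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done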
00:03:52.997 [2024-05-14 04:01:07.428680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784548 ] 00:03:52.997 EAL: No free 2048 kB hugepages reported on node 1 00:03:52.997 [2024-05-14 04:01:07.551581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.256 [2024-05-14 04:01:07.649025] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:53.256 [2024-05-14 04:01:07.649213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.826 04:01:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:53.826 04:01:08 -- common/autotest_common.sh@852 -- # return 0 00:03:53.826 04:01:08 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:53.826 04:01:08 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:53.826 04:01:08 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:c9:00.0 00:03:57.121 nvme0n1 00:03:57.121 04:01:11 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:57.121 [2024-05-14 04:01:11.213046] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:57.121 request: 00:03:57.121 { 00:03:57.121 "nvme_ctrlr_name": "nvme0", 00:03:57.121 "password": "test", 00:03:57.121 "method": "bdev_nvme_opal_revert", 00:03:57.121 "req_id": 1 00:03:57.121 } 00:03:57.121 Got JSON-RPC error response 00:03:57.121 response: 00:03:57.121 { 00:03:57.121 "code": -32602, 00:03:57.121 "message": "Invalid parameters" 00:03:57.121 } 00:03:57.121 04:01:11 -- common/autotest_common.sh@1589 -- # true 00:03:57.121 04:01:11 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:57.121 04:01:11 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:57.121 04:01:11 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:ca:00.0 00:03:59.740 nvme1n1 00:03:59.740 04:01:14 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:04:00.003 [2024-05-14 04:01:14.308917] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme1 not support opal 00:04:00.003 request: 00:04:00.003 { 00:04:00.003 "nvme_ctrlr_name": "nvme1", 00:04:00.003 "password": "test", 00:04:00.003 "method": "bdev_nvme_opal_revert", 00:04:00.003 "req_id": 1 00:04:00.003 } 00:04:00.003 Got JSON-RPC error response 00:04:00.003 response: 00:04:00.003 { 00:04:00.003 "code": -32602, 00:04:00.003 "message": "Invalid parameters" 00:04:00.003 } 00:04:00.003 04:01:14 -- common/autotest_common.sh@1589 -- # true 00:04:00.003 04:01:14 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:00.003 04:01:14 -- common/autotest_common.sh@1593 -- # killprocess 3784548 00:04:00.003 04:01:14 -- common/autotest_common.sh@926 -- # '[' -z 3784548 ']' 00:04:00.003 04:01:14 -- common/autotest_common.sh@930 -- # kill -0 3784548 00:04:00.003 04:01:14 -- common/autotest_common.sh@931 -- # uname 00:04:00.003 04:01:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:00.003 04:01:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3784548 00:04:00.003 
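The spdk_tgt session traced here (bdev_nvme_attach_controller, bdev_nvme_opal_revert, and the -32602 "Invalid parameters" response for drives without OPAL support) can be reproduced roughly as follows. The until-loop is only a simplified stand-in for the waitforlisten helper used in the trace, and the relative paths assume the current directory is the SPDK tree.

    # start the target and wait until its default RPC socket (/var/tmp/spdk.sock) answers
    ./build/bin/spdk_tgt &
    tgt_pid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # attach the controller and attempt the revert; non-OPAL drives fail with -32602 as logged
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:c9:00.0
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test || true
    kill "$tgt_pid" && wait "$tgt_pid"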
04:01:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:00.003 04:01:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:00.003 04:01:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3784548' 00:04:00.003 killing process with pid 3784548 00:04:00.003 04:01:14 -- common/autotest_common.sh@945 -- # kill 3784548 00:04:00.003 04:01:14 -- common/autotest_common.sh@950 -- # wait 3784548 00:04:03.302 04:01:17 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:03.302 04:01:17 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:03.302 04:01:17 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:03.302 04:01:17 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:03.302 04:01:17 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:03.302 04:01:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:03.302 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:04:03.302 04:01:17 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:04:03.302 04:01:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.302 04:01:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.302 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:04:03.302 ************************************ 00:04:03.302 START TEST env 00:04:03.302 ************************************ 00:04:03.302 04:01:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:04:03.302 * Looking for test storage... 00:04:03.302 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env 00:04:03.302 04:01:17 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.302 04:01:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.302 04:01:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.302 04:01:17 -- common/autotest_common.sh@10 -- # set +x 00:04:03.302 ************************************ 00:04:03.302 START TEST env_memory 00:04:03.302 ************************************ 00:04:03.302 04:01:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:04:03.302 00:04:03.302 00:04:03.302 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.302 http://cunit.sourceforge.net/ 00:04:03.302 00:04:03.302 00:04:03.302 Suite: memory 00:04:03.563 Test: alloc and free memory map ...[2024-05-14 04:01:17.925922] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.563 passed 00:04:03.563 Test: mem map translation ...[2024-05-14 04:01:17.973666] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.563 [2024-05-14 04:01:17.973709] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.563 [2024-05-14 04:01:17.973797] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.563 [2024-05-14 04:01:17.973825] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 
00:04:03.563 passed 00:04:03.563 Test: mem map registration ...[2024-05-14 04:01:18.060551] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:03.563 [2024-05-14 04:01:18.060589] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:03.563 passed 00:04:03.825 Test: mem map adjacent registrations ...passed 00:04:03.825 00:04:03.825 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.825 suites 1 1 n/a 0 0 00:04:03.825 tests 4 4 4 0 0 00:04:03.825 asserts 152 152 152 0 n/a 00:04:03.825 00:04:03.825 Elapsed time = 0.294 seconds 00:04:03.825 00:04:03.825 real 0m0.321s 00:04:03.825 user 0m0.292s 00:04:03.825 sys 0m0.026s 00:04:03.825 04:01:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.825 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:04:03.825 ************************************ 00:04:03.825 END TEST env_memory 00:04:03.825 ************************************ 00:04:03.825 04:01:18 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:03.825 04:01:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.825 04:01:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.825 04:01:18 -- common/autotest_common.sh@10 -- # set +x 00:04:03.825 ************************************ 00:04:03.825 START TEST env_vtophys 00:04:03.825 ************************************ 00:04:03.825 04:01:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:03.825 EAL: lib.eal log level changed from notice to debug 00:04:03.825 EAL: Detected lcore 0 as core 0 on socket 0 00:04:03.825 EAL: Detected lcore 1 as core 1 on socket 0 00:04:03.825 EAL: Detected lcore 2 as core 2 on socket 0 00:04:03.825 EAL: Detected lcore 3 as core 3 on socket 0 00:04:03.825 EAL: Detected lcore 4 as core 4 on socket 0 00:04:03.825 EAL: Detected lcore 5 as core 5 on socket 0 00:04:03.825 EAL: Detected lcore 6 as core 6 on socket 0 00:04:03.825 EAL: Detected lcore 7 as core 7 on socket 0 00:04:03.825 EAL: Detected lcore 8 as core 8 on socket 0 00:04:03.825 EAL: Detected lcore 9 as core 9 on socket 0 00:04:03.825 EAL: Detected lcore 10 as core 10 on socket 0 00:04:03.825 EAL: Detected lcore 11 as core 11 on socket 0 00:04:03.825 EAL: Detected lcore 12 as core 12 on socket 0 00:04:03.825 EAL: Detected lcore 13 as core 13 on socket 0 00:04:03.825 EAL: Detected lcore 14 as core 14 on socket 0 00:04:03.825 EAL: Detected lcore 15 as core 15 on socket 0 00:04:03.825 EAL: Detected lcore 16 as core 16 on socket 0 00:04:03.825 EAL: Detected lcore 17 as core 17 on socket 0 00:04:03.825 EAL: Detected lcore 18 as core 18 on socket 0 00:04:03.825 EAL: Detected lcore 19 as core 19 on socket 0 00:04:03.825 EAL: Detected lcore 20 as core 20 on socket 0 00:04:03.825 EAL: Detected lcore 21 as core 21 on socket 0 00:04:03.825 EAL: Detected lcore 22 as core 22 on socket 0 00:04:03.825 EAL: Detected lcore 23 as core 23 on socket 0 00:04:03.825 EAL: Detected lcore 24 as core 24 on socket 0 00:04:03.825 EAL: Detected lcore 25 as core 25 on socket 0 00:04:03.825 EAL: Detected lcore 26 as core 26 on socket 0 00:04:03.825 EAL: Detected lcore 27 as core 27 on socket 0 00:04:03.825 EAL: Detected lcore 28 as core 28 on socket 0 00:04:03.825 EAL: Detected lcore 29 as 
core 29 on socket 0 00:04:03.825 EAL: Detected lcore 30 as core 30 on socket 0 00:04:03.825 EAL: Detected lcore 31 as core 31 on socket 0 00:04:03.825 EAL: Detected lcore 32 as core 0 on socket 1 00:04:03.825 EAL: Detected lcore 33 as core 1 on socket 1 00:04:03.825 EAL: Detected lcore 34 as core 2 on socket 1 00:04:03.825 EAL: Detected lcore 35 as core 3 on socket 1 00:04:03.825 EAL: Detected lcore 36 as core 4 on socket 1 00:04:03.825 EAL: Detected lcore 37 as core 5 on socket 1 00:04:03.825 EAL: Detected lcore 38 as core 6 on socket 1 00:04:03.825 EAL: Detected lcore 39 as core 7 on socket 1 00:04:03.825 EAL: Detected lcore 40 as core 8 on socket 1 00:04:03.825 EAL: Detected lcore 41 as core 9 on socket 1 00:04:03.825 EAL: Detected lcore 42 as core 10 on socket 1 00:04:03.825 EAL: Detected lcore 43 as core 11 on socket 1 00:04:03.825 EAL: Detected lcore 44 as core 12 on socket 1 00:04:03.825 EAL: Detected lcore 45 as core 13 on socket 1 00:04:03.825 EAL: Detected lcore 46 as core 14 on socket 1 00:04:03.825 EAL: Detected lcore 47 as core 15 on socket 1 00:04:03.825 EAL: Detected lcore 48 as core 16 on socket 1 00:04:03.825 EAL: Detected lcore 49 as core 17 on socket 1 00:04:03.825 EAL: Detected lcore 50 as core 18 on socket 1 00:04:03.825 EAL: Detected lcore 51 as core 19 on socket 1 00:04:03.825 EAL: Detected lcore 52 as core 20 on socket 1 00:04:03.825 EAL: Detected lcore 53 as core 21 on socket 1 00:04:03.825 EAL: Detected lcore 54 as core 22 on socket 1 00:04:03.825 EAL: Detected lcore 55 as core 23 on socket 1 00:04:03.825 EAL: Detected lcore 56 as core 24 on socket 1 00:04:03.825 EAL: Detected lcore 57 as core 25 on socket 1 00:04:03.825 EAL: Detected lcore 58 as core 26 on socket 1 00:04:03.825 EAL: Detected lcore 59 as core 27 on socket 1 00:04:03.825 EAL: Detected lcore 60 as core 28 on socket 1 00:04:03.825 EAL: Detected lcore 61 as core 29 on socket 1 00:04:03.825 EAL: Detected lcore 62 as core 30 on socket 1 00:04:03.825 EAL: Detected lcore 63 as core 31 on socket 1 00:04:03.825 EAL: Detected lcore 64 as core 0 on socket 0 00:04:03.825 EAL: Detected lcore 65 as core 1 on socket 0 00:04:03.825 EAL: Detected lcore 66 as core 2 on socket 0 00:04:03.825 EAL: Detected lcore 67 as core 3 on socket 0 00:04:03.825 EAL: Detected lcore 68 as core 4 on socket 0 00:04:03.825 EAL: Detected lcore 69 as core 5 on socket 0 00:04:03.825 EAL: Detected lcore 70 as core 6 on socket 0 00:04:03.825 EAL: Detected lcore 71 as core 7 on socket 0 00:04:03.825 EAL: Detected lcore 72 as core 8 on socket 0 00:04:03.825 EAL: Detected lcore 73 as core 9 on socket 0 00:04:03.825 EAL: Detected lcore 74 as core 10 on socket 0 00:04:03.825 EAL: Detected lcore 75 as core 11 on socket 0 00:04:03.825 EAL: Detected lcore 76 as core 12 on socket 0 00:04:03.825 EAL: Detected lcore 77 as core 13 on socket 0 00:04:03.825 EAL: Detected lcore 78 as core 14 on socket 0 00:04:03.825 EAL: Detected lcore 79 as core 15 on socket 0 00:04:03.825 EAL: Detected lcore 80 as core 16 on socket 0 00:04:03.825 EAL: Detected lcore 81 as core 17 on socket 0 00:04:03.825 EAL: Detected lcore 82 as core 18 on socket 0 00:04:03.825 EAL: Detected lcore 83 as core 19 on socket 0 00:04:03.825 EAL: Detected lcore 84 as core 20 on socket 0 00:04:03.825 EAL: Detected lcore 85 as core 21 on socket 0 00:04:03.825 EAL: Detected lcore 86 as core 22 on socket 0 00:04:03.825 EAL: Detected lcore 87 as core 23 on socket 0 00:04:03.825 EAL: Detected lcore 88 as core 24 on socket 0 00:04:03.825 EAL: Detected lcore 89 as core 25 on socket 0 00:04:03.825 
EAL: Detected lcore 90 as core 26 on socket 0 00:04:03.825 EAL: Detected lcore 91 as core 27 on socket 0 00:04:03.825 EAL: Detected lcore 92 as core 28 on socket 0 00:04:03.825 EAL: Detected lcore 93 as core 29 on socket 0 00:04:03.825 EAL: Detected lcore 94 as core 30 on socket 0 00:04:03.825 EAL: Detected lcore 95 as core 31 on socket 0 00:04:03.826 EAL: Detected lcore 96 as core 0 on socket 1 00:04:03.826 EAL: Detected lcore 97 as core 1 on socket 1 00:04:03.826 EAL: Detected lcore 98 as core 2 on socket 1 00:04:03.826 EAL: Detected lcore 99 as core 3 on socket 1 00:04:03.826 EAL: Detected lcore 100 as core 4 on socket 1 00:04:03.826 EAL: Detected lcore 101 as core 5 on socket 1 00:04:03.826 EAL: Detected lcore 102 as core 6 on socket 1 00:04:03.826 EAL: Detected lcore 103 as core 7 on socket 1 00:04:03.826 EAL: Detected lcore 104 as core 8 on socket 1 00:04:03.826 EAL: Detected lcore 105 as core 9 on socket 1 00:04:03.826 EAL: Detected lcore 106 as core 10 on socket 1 00:04:03.826 EAL: Detected lcore 107 as core 11 on socket 1 00:04:03.826 EAL: Detected lcore 108 as core 12 on socket 1 00:04:03.826 EAL: Detected lcore 109 as core 13 on socket 1 00:04:03.826 EAL: Detected lcore 110 as core 14 on socket 1 00:04:03.826 EAL: Detected lcore 111 as core 15 on socket 1 00:04:03.826 EAL: Detected lcore 112 as core 16 on socket 1 00:04:03.826 EAL: Detected lcore 113 as core 17 on socket 1 00:04:03.826 EAL: Detected lcore 114 as core 18 on socket 1 00:04:03.826 EAL: Detected lcore 115 as core 19 on socket 1 00:04:03.826 EAL: Detected lcore 116 as core 20 on socket 1 00:04:03.826 EAL: Detected lcore 117 as core 21 on socket 1 00:04:03.826 EAL: Detected lcore 118 as core 22 on socket 1 00:04:03.826 EAL: Detected lcore 119 as core 23 on socket 1 00:04:03.826 EAL: Detected lcore 120 as core 24 on socket 1 00:04:03.826 EAL: Detected lcore 121 as core 25 on socket 1 00:04:03.826 EAL: Detected lcore 122 as core 26 on socket 1 00:04:03.826 EAL: Detected lcore 123 as core 27 on socket 1 00:04:03.826 EAL: Detected lcore 124 as core 28 on socket 1 00:04:03.826 EAL: Detected lcore 125 as core 29 on socket 1 00:04:03.826 EAL: Detected lcore 126 as core 30 on socket 1 00:04:03.826 EAL: Detected lcore 127 as core 31 on socket 1 00:04:03.826 EAL: Maximum logical cores by configuration: 128 00:04:03.826 EAL: Detected CPU lcores: 128 00:04:03.826 EAL: Detected NUMA nodes: 2 00:04:03.826 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:03.826 EAL: Detected shared linkage of DPDK 00:04:03.826 EAL: No shared files mode enabled, IPC will be disabled 00:04:03.826 EAL: Bus pci wants IOVA as 'DC' 00:04:03.826 EAL: Buses did not request a specific IOVA mode. 00:04:03.826 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:03.826 EAL: Selected IOVA mode 'VA' 00:04:03.826 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.826 EAL: Probing VFIO support... 00:04:03.826 EAL: IOMMU type 1 (Type 1) is supported 00:04:03.826 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:03.826 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:03.826 EAL: VFIO support initialized 00:04:03.826 EAL: Ask a virtual area of 0x2e000 bytes 00:04:03.826 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:03.826 EAL: Setting up physically contiguous memory... 
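The recurring "EAL: No free 2048 kB hugepages reported on node 1" notices are consistent with the setup.sh status output earlier in this log (2048 pages on node0, none on node1). One way to confirm the per-node pools outside of EAL, assuming the standard sysfs layout and the 2 MB page size used in this run:

    for node in /sys/devices/system/node/node*; do
        hp=$node/hugepages/hugepages-2048kB
        echo "$(basename "$node"): $(cat "$hp/free_hugepages")/$(cat "$hp/nr_hugepages") free/total 2048 kB pages"
    done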
00:04:03.826 EAL: Setting maximum number of open files to 524288 00:04:03.826 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:03.826 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:03.826 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:03.826 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.826 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:03.826 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.826 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.826 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:03.826 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:03.826 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.826 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:03.826 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.826 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.826 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:03.826 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:03.826 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.826 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:03.826 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.826 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.826 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:03.826 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:03.826 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.826 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:03.826 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.826 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.826 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:03.826 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:03.826 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:03.826 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.826 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:03.826 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.826 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.826 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:03.826 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:03.826 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.826 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:03.826 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.826 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.826 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:03.826 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:03.826 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.826 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:03.826 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.826 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.826 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:03.826 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:03.826 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.826 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:03.826 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:03.826 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.826 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:03.826 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:03.826 EAL: Hugepages will be freed exactly as allocated. 00:04:03.826 EAL: No shared files mode enabled, IPC is disabled 00:04:03.826 EAL: No shared files mode enabled, IPC is disabled 00:04:03.826 EAL: TSC frequency is ~1900000 KHz 00:04:03.826 EAL: Main lcore 0 is ready (tid=7f7c4bd42a40;cpuset=[0]) 00:04:03.826 EAL: Trying to obtain current memory policy. 00:04:03.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.826 EAL: Restoring previous memory policy: 0 00:04:03.826 EAL: request: mp_malloc_sync 00:04:03.826 EAL: No shared files mode enabled, IPC is disabled 00:04:03.826 EAL: Heap on socket 0 was expanded by 2MB 00:04:03.826 EAL: No shared files mode enabled, IPC is disabled 00:04:03.826 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:03.826 EAL: Mem event callback 'spdk:(nil)' registered 00:04:03.826 00:04:03.826 00:04:03.826 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.826 http://cunit.sourceforge.net/ 00:04:03.826 00:04:03.826 00:04:03.826 Suite: components_suite 00:04:04.088 Test: vtophys_malloc_test ...passed 00:04:04.088 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.088 EAL: Restoring previous memory policy: 4 00:04:04.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.088 EAL: request: mp_malloc_sync 00:04:04.088 EAL: No shared files mode enabled, IPC is disabled 00:04:04.088 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.088 EAL: request: mp_malloc_sync 00:04:04.088 EAL: No shared files mode enabled, IPC is disabled 00:04:04.088 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.088 EAL: Trying to obtain current memory policy. 00:04:04.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.088 EAL: Restoring previous memory policy: 4 00:04:04.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.088 EAL: request: mp_malloc_sync 00:04:04.088 EAL: No shared files mode enabled, IPC is disabled 00:04:04.088 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.088 EAL: request: mp_malloc_sync 00:04:04.088 EAL: No shared files mode enabled, IPC is disabled 00:04:04.088 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.088 EAL: Trying to obtain current memory policy. 00:04:04.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.088 EAL: Restoring previous memory policy: 4 00:04:04.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.088 EAL: request: mp_malloc_sync 00:04:04.088 EAL: No shared files mode enabled, IPC is disabled 00:04:04.088 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.088 EAL: request: mp_malloc_sync 00:04:04.088 EAL: No shared files mode enabled, IPC is disabled 00:04:04.088 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.088 EAL: Trying to obtain current memory policy. 
00:04:04.088 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.088 EAL: Restoring previous memory policy: 4 00:04:04.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.088 EAL: request: mp_malloc_sync 00:04:04.088 EAL: No shared files mode enabled, IPC is disabled 00:04:04.088 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.088 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.088 EAL: request: mp_malloc_sync 00:04:04.088 EAL: No shared files mode enabled, IPC is disabled 00:04:04.088 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.349 EAL: Trying to obtain current memory policy. 00:04:04.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.349 EAL: Restoring previous memory policy: 4 00:04:04.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.349 EAL: request: mp_malloc_sync 00:04:04.349 EAL: No shared files mode enabled, IPC is disabled 00:04:04.349 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.349 EAL: request: mp_malloc_sync 00:04:04.349 EAL: No shared files mode enabled, IPC is disabled 00:04:04.349 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.349 EAL: Trying to obtain current memory policy. 00:04:04.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.349 EAL: Restoring previous memory policy: 4 00:04:04.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.349 EAL: request: mp_malloc_sync 00:04:04.349 EAL: No shared files mode enabled, IPC is disabled 00:04:04.349 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.349 EAL: request: mp_malloc_sync 00:04:04.349 EAL: No shared files mode enabled, IPC is disabled 00:04:04.349 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.349 EAL: Trying to obtain current memory policy. 00:04:04.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.349 EAL: Restoring previous memory policy: 4 00:04:04.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.349 EAL: request: mp_malloc_sync 00:04:04.349 EAL: No shared files mode enabled, IPC is disabled 00:04:04.349 EAL: Heap on socket 0 was expanded by 130MB 00:04:04.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.349 EAL: request: mp_malloc_sync 00:04:04.349 EAL: No shared files mode enabled, IPC is disabled 00:04:04.349 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.610 EAL: Trying to obtain current memory policy. 00:04:04.610 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.610 EAL: Restoring previous memory policy: 4 00:04:04.610 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.610 EAL: request: mp_malloc_sync 00:04:04.610 EAL: No shared files mode enabled, IPC is disabled 00:04:04.610 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.610 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.610 EAL: request: mp_malloc_sync 00:04:04.610 EAL: No shared files mode enabled, IPC is disabled 00:04:04.610 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.869 EAL: Trying to obtain current memory policy. 
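The heap expansions reported by the malloc test step through 4, 6, 10, 18, 34, 66, 130 and 258 MB here, with 514 MB and 1026 MB following just below; the logged sizes fit a 2^k + 2 MB progression. Purely as an observation about the numbers in this log (not a claim about what the test allocates internally), the sequence can be reproduced with:

    for k in $(seq 1 10); do printf '%s MB\n' "$((2**k + 2))"; done
    # 4 6 10 18 34 66 130 258 514 1026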
00:04:04.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.869 EAL: Restoring previous memory policy: 4 00:04:04.869 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.869 EAL: request: mp_malloc_sync 00:04:04.869 EAL: No shared files mode enabled, IPC is disabled 00:04:04.869 EAL: Heap on socket 0 was expanded by 514MB 00:04:05.128 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.128 EAL: request: mp_malloc_sync 00:04:05.128 EAL: No shared files mode enabled, IPC is disabled 00:04:05.128 EAL: Heap on socket 0 was shrunk by 514MB 00:04:05.389 EAL: Trying to obtain current memory policy. 00:04:05.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.649 EAL: Restoring previous memory policy: 4 00:04:05.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.649 EAL: request: mp_malloc_sync 00:04:05.649 EAL: No shared files mode enabled, IPC is disabled 00:04:05.649 EAL: Heap on socket 0 was expanded by 1026MB 00:04:06.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.221 EAL: request: mp_malloc_sync 00:04:06.221 EAL: No shared files mode enabled, IPC is disabled 00:04:06.221 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:06.833 passed 00:04:06.833 00:04:06.833 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.833 suites 1 1 n/a 0 0 00:04:06.833 tests 2 2 2 0 0 00:04:06.833 asserts 497 497 497 0 n/a 00:04:06.833 00:04:06.833 Elapsed time = 2.873 seconds 00:04:06.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.833 EAL: request: mp_malloc_sync 00:04:06.833 EAL: No shared files mode enabled, IPC is disabled 00:04:06.833 EAL: Heap on socket 0 was shrunk by 2MB 00:04:06.833 EAL: No shared files mode enabled, IPC is disabled 00:04:06.833 EAL: No shared files mode enabled, IPC is disabled 00:04:06.833 EAL: No shared files mode enabled, IPC is disabled 00:04:06.833 00:04:06.833 real 0m3.108s 00:04:06.833 user 0m2.451s 00:04:06.833 sys 0m0.619s 00:04:06.833 04:01:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.833 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.833 ************************************ 00:04:06.833 END TEST env_vtophys 00:04:06.833 ************************************ 00:04:06.833 04:01:21 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:04:06.833 04:01:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.833 04:01:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.833 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.833 ************************************ 00:04:06.833 START TEST env_pci 00:04:06.833 ************************************ 00:04:06.833 04:01:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:04:06.833 00:04:06.833 00:04:06.833 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.833 http://cunit.sourceforge.net/ 00:04:06.833 00:04:06.833 00:04:06.833 Suite: pci 00:04:06.833 Test: pci_hook ...[2024-05-14 04:01:21.402469] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3787476 has claimed it 00:04:07.095 EAL: Cannot find device (10000:00:01.0) 00:04:07.095 EAL: Failed to attach device on primary process 00:04:07.095 passed 00:04:07.095 00:04:07.095 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.095 suites 1 1 n/a 0 0 00:04:07.095 tests 1 1 1 0 0 00:04:07.095 asserts 25 
25 25 0 n/a 00:04:07.095 00:04:07.095 Elapsed time = 0.052 seconds 00:04:07.095 00:04:07.095 real 0m0.103s 00:04:07.095 user 0m0.035s 00:04:07.095 sys 0m0.068s 00:04:07.095 04:01:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.095 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.095 ************************************ 00:04:07.095 END TEST env_pci 00:04:07.095 ************************************ 00:04:07.095 04:01:21 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:07.095 04:01:21 -- env/env.sh@15 -- # uname 00:04:07.095 04:01:21 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:07.095 04:01:21 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:07.095 04:01:21 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.095 04:01:21 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:07.095 04:01:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.095 04:01:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.095 ************************************ 00:04:07.095 START TEST env_dpdk_post_init 00:04:07.095 ************************************ 00:04:07.095 04:01:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.095 EAL: Detected CPU lcores: 128 00:04:07.095 EAL: Detected NUMA nodes: 2 00:04:07.095 EAL: Detected shared linkage of DPDK 00:04:07.095 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.095 EAL: Selected IOVA mode 'VA' 00:04:07.095 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.095 EAL: VFIO support initialized 00:04:07.095 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.355 EAL: Using IOMMU type 1 (Type 1) 00:04:07.355 EAL: Ignore mapping IO port bar(1) 00:04:07.355 EAL: Ignore mapping IO port bar(3) 00:04:07.614 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6a:01.0 (socket 0) 00:04:07.614 EAL: Ignore mapping IO port bar(1) 00:04:07.614 EAL: Ignore mapping IO port bar(3) 00:04:07.614 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6a:02.0 (socket 0) 00:04:07.875 EAL: Ignore mapping IO port bar(1) 00:04:07.875 EAL: Ignore mapping IO port bar(3) 00:04:07.875 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6f:01.0 (socket 0) 00:04:08.136 EAL: Ignore mapping IO port bar(1) 00:04:08.136 EAL: Ignore mapping IO port bar(3) 00:04:08.136 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6f:02.0 (socket 0) 00:04:08.398 EAL: Ignore mapping IO port bar(1) 00:04:08.398 EAL: Ignore mapping IO port bar(3) 00:04:08.398 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:74:01.0 (socket 0) 00:04:08.398 EAL: Ignore mapping IO port bar(1) 00:04:08.398 EAL: Ignore mapping IO port bar(3) 00:04:08.659 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:74:02.0 (socket 0) 00:04:08.659 EAL: Ignore mapping IO port bar(1) 00:04:08.659 EAL: Ignore mapping IO port bar(3) 00:04:08.920 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:79:01.0 (socket 0) 00:04:08.920 EAL: Ignore mapping IO port bar(1) 00:04:08.920 EAL: Ignore mapping IO port bar(3) 00:04:09.186 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:79:02.0 (socket 0) 00:04:09.757 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:c9:00.0 (socket 1) 00:04:10.696 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 
0000:ca:00.0 (socket 1) 00:04:10.696 EAL: Ignore mapping IO port bar(1) 00:04:10.696 EAL: Ignore mapping IO port bar(3) 00:04:10.696 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:e7:01.0 (socket 1) 00:04:10.957 EAL: Ignore mapping IO port bar(1) 00:04:10.957 EAL: Ignore mapping IO port bar(3) 00:04:10.957 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:e7:02.0 (socket 1) 00:04:11.217 EAL: Ignore mapping IO port bar(1) 00:04:11.217 EAL: Ignore mapping IO port bar(3) 00:04:11.217 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:ec:01.0 (socket 1) 00:04:11.217 EAL: Ignore mapping IO port bar(1) 00:04:11.217 EAL: Ignore mapping IO port bar(3) 00:04:11.478 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:ec:02.0 (socket 1) 00:04:11.478 EAL: Ignore mapping IO port bar(1) 00:04:11.478 EAL: Ignore mapping IO port bar(3) 00:04:11.739 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f1:01.0 (socket 1) 00:04:11.739 EAL: Ignore mapping IO port bar(1) 00:04:11.739 EAL: Ignore mapping IO port bar(3) 00:04:11.999 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f1:02.0 (socket 1) 00:04:11.999 EAL: Ignore mapping IO port bar(1) 00:04:11.999 EAL: Ignore mapping IO port bar(3) 00:04:11.999 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f6:01.0 (socket 1) 00:04:12.258 EAL: Ignore mapping IO port bar(1) 00:04:12.258 EAL: Ignore mapping IO port bar(3) 00:04:12.258 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f6:02.0 (socket 1) 00:04:16.459 EAL: Releasing PCI mapped resource for 0000:ca:00.0 00:04:16.459 EAL: Calling pci_unmap_resource for 0000:ca:00.0 at 0x202001184000 00:04:16.459 EAL: Releasing PCI mapped resource for 0000:c9:00.0 00:04:16.459 EAL: Calling pci_unmap_resource for 0000:c9:00.0 at 0x202001180000 00:04:17.030 Starting DPDK initialization... 00:04:17.030 Starting SPDK post initialization... 00:04:17.030 SPDK NVMe probe 00:04:17.030 Attaching to 0000:c9:00.0 00:04:17.030 Attaching to 0000:ca:00.0 00:04:17.030 Attached to 0000:c9:00.0 00:04:17.030 Attached to 0000:ca:00.0 00:04:17.030 Cleaning up... 
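(The spdk_idxd and spdk_nvme probes above assume the DSA/IAA engines and the two NVMe controllers were already bound to a vfio-pci/uio driver before the env suite ran. Outside of autotest, a roughly equivalent preparation step — a sketch using SPDK's standard setup script, not commands taken from this run — would be:

    sudo ./scripts/setup.sh status          # show which PCI devices are bound and how many hugepages are reserved
    sudo HUGEMEM=8192 ./scripts/setup.sh    # bind supported devices and reserve hugepages

Both commands are run from the SPDK repository root; the HUGEMEM value of 8192 MB is illustrative, not the amount configured on this node.)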
00:04:19.024 00:04:19.024 real 0m11.563s 00:04:19.024 user 0m4.646s 00:04:19.024 sys 0m0.201s 00:04:19.024 04:01:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.024 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.024 ************************************ 00:04:19.024 END TEST env_dpdk_post_init 00:04:19.024 ************************************ 00:04:19.024 04:01:33 -- env/env.sh@26 -- # uname 00:04:19.024 04:01:33 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:19.024 04:01:33 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.024 04:01:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:19.024 04:01:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.024 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.024 ************************************ 00:04:19.024 START TEST env_mem_callbacks 00:04:19.024 ************************************ 00:04:19.024 04:01:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.024 EAL: Detected CPU lcores: 128 00:04:19.024 EAL: Detected NUMA nodes: 2 00:04:19.024 EAL: Detected shared linkage of DPDK 00:04:19.024 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.024 EAL: Selected IOVA mode 'VA' 00:04:19.024 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.024 EAL: VFIO support initialized 00:04:19.024 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.024 00:04:19.024 00:04:19.024 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.024 http://cunit.sourceforge.net/ 00:04:19.024 00:04:19.024 00:04:19.024 Suite: memory 00:04:19.024 Test: test ... 
00:04:19.024 register 0x200000200000 2097152 00:04:19.024 malloc 3145728 00:04:19.024 register 0x200000400000 4194304 00:04:19.024 buf 0x2000004fffc0 len 3145728 PASSED 00:04:19.024 malloc 64 00:04:19.024 buf 0x2000004ffec0 len 64 PASSED 00:04:19.024 malloc 4194304 00:04:19.024 register 0x200000800000 6291456 00:04:19.024 buf 0x2000009fffc0 len 4194304 PASSED 00:04:19.024 free 0x2000004fffc0 3145728 00:04:19.024 free 0x2000004ffec0 64 00:04:19.024 unregister 0x200000400000 4194304 PASSED 00:04:19.024 free 0x2000009fffc0 4194304 00:04:19.024 unregister 0x200000800000 6291456 PASSED 00:04:19.024 malloc 8388608 00:04:19.024 register 0x200000400000 10485760 00:04:19.024 buf 0x2000005fffc0 len 8388608 PASSED 00:04:19.024 free 0x2000005fffc0 8388608 00:04:19.024 unregister 0x200000400000 10485760 PASSED 00:04:19.024 passed 00:04:19.024 00:04:19.024 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.024 suites 1 1 n/a 0 0 00:04:19.024 tests 1 1 1 0 0 00:04:19.024 asserts 15 15 15 0 n/a 00:04:19.024 00:04:19.024 Elapsed time = 0.021 seconds 00:04:19.024 00:04:19.024 real 0m0.155s 00:04:19.024 user 0m0.051s 00:04:19.024 sys 0m0.105s 00:04:19.024 04:01:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.024 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.024 ************************************ 00:04:19.024 END TEST env_mem_callbacks 00:04:19.024 ************************************ 00:04:19.024 00:04:19.024 real 0m15.542s 00:04:19.024 user 0m7.573s 00:04:19.024 sys 0m1.254s 00:04:19.024 04:01:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.024 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.024 ************************************ 00:04:19.024 END TEST env 00:04:19.024 ************************************ 00:04:19.024 04:01:33 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:19.024 04:01:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:19.024 04:01:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.024 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.024 ************************************ 00:04:19.024 START TEST rpc 00:04:19.024 ************************************ 00:04:19.024 04:01:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:19.024 * Looking for test storage... 00:04:19.024 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:19.024 04:01:33 -- rpc/rpc.sh@65 -- # spdk_pid=3790002 00:04:19.024 04:01:33 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.024 04:01:33 -- rpc/rpc.sh@67 -- # waitforlisten 3790002 00:04:19.024 04:01:33 -- common/autotest_common.sh@819 -- # '[' -z 3790002 ']' 00:04:19.024 04:01:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.024 04:01:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:19.024 04:01:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
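(The rpc suite launched above starts spdk_tgt with -e bdev and drives it over the default UNIX socket /var/tmp/spdk.sock through the rpc_cmd wrapper around scripts/rpc.py. The rpc_integrity test that follows creates an 8 MB malloc bdev with 512-byte blocks, layers a passthru bdev on top of it, verifies both via bdev_get_bdevs, then tears them down. A minimal manual reproduction — a sketch assuming an spdk_tgt is already listening on the default socket — would be:

    ./scripts/rpc.py bdev_malloc_create 8 512                      # returns Malloc0 (16384 x 512-byte blocks)
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claim Malloc0 under a passthru bdev
    ./scripts/rpc.py bdev_get_bdevs                                # should list Malloc0 (claimed) and Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0

The RPC names and arguments are the ones the test issues below; only the explicit ./scripts/rpc.py invocation differs from the test's rpc_cmd wrapper.)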
00:04:19.024 04:01:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:19.024 04:01:33 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:19.024 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.024 [2024-05-14 04:01:33.536149] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:19.024 [2024-05-14 04:01:33.536288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3790002 ] 00:04:19.024 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.284 [2024-05-14 04:01:33.648547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.284 [2024-05-14 04:01:33.738346] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:19.284 [2024-05-14 04:01:33.738525] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:19.284 [2024-05-14 04:01:33.738539] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3790002' to capture a snapshot of events at runtime. 00:04:19.284 [2024-05-14 04:01:33.738549] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3790002 for offline analysis/debug. 00:04:19.284 [2024-05-14 04:01:33.738573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.851 04:01:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:19.851 04:01:34 -- common/autotest_common.sh@852 -- # return 0 00:04:19.851 04:01:34 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:19.851 04:01:34 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:19.851 04:01:34 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:19.851 04:01:34 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:19.851 04:01:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:19.851 04:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.851 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:19.851 ************************************ 00:04:19.851 START TEST rpc_integrity 00:04:19.851 ************************************ 00:04:19.851 04:01:34 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:19.851 04:01:34 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.851 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.851 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:19.851 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.851 04:01:34 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.851 04:01:34 -- rpc/rpc.sh@13 -- # jq length 00:04:19.851 04:01:34 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.851 04:01:34 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.851 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.851 04:01:34 -- common/autotest_common.sh@10 -- # 
set +x 00:04:19.851 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.851 04:01:34 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:19.851 04:01:34 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.851 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.851 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:19.851 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.851 04:01:34 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.851 { 00:04:19.851 "name": "Malloc0", 00:04:19.851 "aliases": [ 00:04:19.851 "d29d8767-2763-478b-a520-25cb4baffc43" 00:04:19.851 ], 00:04:19.851 "product_name": "Malloc disk", 00:04:19.851 "block_size": 512, 00:04:19.851 "num_blocks": 16384, 00:04:19.851 "uuid": "d29d8767-2763-478b-a520-25cb4baffc43", 00:04:19.851 "assigned_rate_limits": { 00:04:19.851 "rw_ios_per_sec": 0, 00:04:19.851 "rw_mbytes_per_sec": 0, 00:04:19.851 "r_mbytes_per_sec": 0, 00:04:19.851 "w_mbytes_per_sec": 0 00:04:19.851 }, 00:04:19.851 "claimed": false, 00:04:19.851 "zoned": false, 00:04:19.851 "supported_io_types": { 00:04:19.851 "read": true, 00:04:19.851 "write": true, 00:04:19.851 "unmap": true, 00:04:19.851 "write_zeroes": true, 00:04:19.851 "flush": true, 00:04:19.851 "reset": true, 00:04:19.851 "compare": false, 00:04:19.851 "compare_and_write": false, 00:04:19.851 "abort": true, 00:04:19.851 "nvme_admin": false, 00:04:19.851 "nvme_io": false 00:04:19.851 }, 00:04:19.851 "memory_domains": [ 00:04:19.851 { 00:04:19.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.851 "dma_device_type": 2 00:04:19.851 } 00:04:19.851 ], 00:04:19.851 "driver_specific": {} 00:04:19.851 } 00:04:19.851 ]' 00:04:19.851 04:01:34 -- rpc/rpc.sh@17 -- # jq length 00:04:19.851 04:01:34 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.851 04:01:34 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:19.851 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.851 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:19.851 [2024-05-14 04:01:34.371982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:19.851 [2024-05-14 04:01:34.372030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.851 [2024-05-14 04:01:34.372058] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001fb80 00:04:19.851 [2024-05-14 04:01:34.372067] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.851 [2024-05-14 04:01:34.373745] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.851 [2024-05-14 04:01:34.373771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.851 Passthru0 00:04:19.851 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.851 04:01:34 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.851 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.851 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:19.851 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.851 04:01:34 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.851 { 00:04:19.852 "name": "Malloc0", 00:04:19.852 "aliases": [ 00:04:19.852 "d29d8767-2763-478b-a520-25cb4baffc43" 00:04:19.852 ], 00:04:19.852 "product_name": "Malloc disk", 00:04:19.852 "block_size": 512, 00:04:19.852 "num_blocks": 16384, 00:04:19.852 "uuid": "d29d8767-2763-478b-a520-25cb4baffc43", 00:04:19.852 "assigned_rate_limits": { 00:04:19.852 
"rw_ios_per_sec": 0, 00:04:19.852 "rw_mbytes_per_sec": 0, 00:04:19.852 "r_mbytes_per_sec": 0, 00:04:19.852 "w_mbytes_per_sec": 0 00:04:19.852 }, 00:04:19.852 "claimed": true, 00:04:19.852 "claim_type": "exclusive_write", 00:04:19.852 "zoned": false, 00:04:19.852 "supported_io_types": { 00:04:19.852 "read": true, 00:04:19.852 "write": true, 00:04:19.852 "unmap": true, 00:04:19.852 "write_zeroes": true, 00:04:19.852 "flush": true, 00:04:19.852 "reset": true, 00:04:19.852 "compare": false, 00:04:19.852 "compare_and_write": false, 00:04:19.852 "abort": true, 00:04:19.852 "nvme_admin": false, 00:04:19.852 "nvme_io": false 00:04:19.852 }, 00:04:19.852 "memory_domains": [ 00:04:19.852 { 00:04:19.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.852 "dma_device_type": 2 00:04:19.852 } 00:04:19.852 ], 00:04:19.852 "driver_specific": {} 00:04:19.852 }, 00:04:19.852 { 00:04:19.852 "name": "Passthru0", 00:04:19.852 "aliases": [ 00:04:19.852 "024d0183-a84a-5b89-9d81-00f21d89f228" 00:04:19.852 ], 00:04:19.852 "product_name": "passthru", 00:04:19.852 "block_size": 512, 00:04:19.852 "num_blocks": 16384, 00:04:19.852 "uuid": "024d0183-a84a-5b89-9d81-00f21d89f228", 00:04:19.852 "assigned_rate_limits": { 00:04:19.852 "rw_ios_per_sec": 0, 00:04:19.852 "rw_mbytes_per_sec": 0, 00:04:19.852 "r_mbytes_per_sec": 0, 00:04:19.852 "w_mbytes_per_sec": 0 00:04:19.852 }, 00:04:19.852 "claimed": false, 00:04:19.852 "zoned": false, 00:04:19.852 "supported_io_types": { 00:04:19.852 "read": true, 00:04:19.852 "write": true, 00:04:19.852 "unmap": true, 00:04:19.852 "write_zeroes": true, 00:04:19.852 "flush": true, 00:04:19.852 "reset": true, 00:04:19.852 "compare": false, 00:04:19.852 "compare_and_write": false, 00:04:19.852 "abort": true, 00:04:19.852 "nvme_admin": false, 00:04:19.852 "nvme_io": false 00:04:19.852 }, 00:04:19.852 "memory_domains": [ 00:04:19.852 { 00:04:19.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.852 "dma_device_type": 2 00:04:19.852 } 00:04:19.852 ], 00:04:19.852 "driver_specific": { 00:04:19.852 "passthru": { 00:04:19.852 "name": "Passthru0", 00:04:19.852 "base_bdev_name": "Malloc0" 00:04:19.852 } 00:04:19.852 } 00:04:19.852 } 00:04:19.852 ]' 00:04:19.852 04:01:34 -- rpc/rpc.sh@21 -- # jq length 00:04:19.852 04:01:34 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.852 04:01:34 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.852 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.852 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:19.852 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.852 04:01:34 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:19.852 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.852 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.111 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.111 04:01:34 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.111 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.111 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.111 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.111 04:01:34 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.111 04:01:34 -- rpc/rpc.sh@26 -- # jq length 00:04:20.111 04:01:34 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.111 00:04:20.111 real 0m0.232s 00:04:20.111 user 0m0.137s 00:04:20.111 sys 0m0.028s 00:04:20.111 04:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.111 04:01:34 -- 
common/autotest_common.sh@10 -- # set +x 00:04:20.111 ************************************ 00:04:20.111 END TEST rpc_integrity 00:04:20.111 ************************************ 00:04:20.111 04:01:34 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:20.111 04:01:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.111 04:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.111 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.111 ************************************ 00:04:20.111 START TEST rpc_plugins 00:04:20.111 ************************************ 00:04:20.111 04:01:34 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:20.111 04:01:34 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:20.111 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.111 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.111 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.111 04:01:34 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:20.111 04:01:34 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:20.111 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.111 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.111 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.111 04:01:34 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:20.111 { 00:04:20.111 "name": "Malloc1", 00:04:20.111 "aliases": [ 00:04:20.111 "252fa8f0-9cb9-4efa-aa82-848162b98b71" 00:04:20.111 ], 00:04:20.111 "product_name": "Malloc disk", 00:04:20.111 "block_size": 4096, 00:04:20.111 "num_blocks": 256, 00:04:20.111 "uuid": "252fa8f0-9cb9-4efa-aa82-848162b98b71", 00:04:20.111 "assigned_rate_limits": { 00:04:20.111 "rw_ios_per_sec": 0, 00:04:20.111 "rw_mbytes_per_sec": 0, 00:04:20.111 "r_mbytes_per_sec": 0, 00:04:20.111 "w_mbytes_per_sec": 0 00:04:20.111 }, 00:04:20.111 "claimed": false, 00:04:20.111 "zoned": false, 00:04:20.111 "supported_io_types": { 00:04:20.111 "read": true, 00:04:20.111 "write": true, 00:04:20.111 "unmap": true, 00:04:20.111 "write_zeroes": true, 00:04:20.111 "flush": true, 00:04:20.111 "reset": true, 00:04:20.111 "compare": false, 00:04:20.111 "compare_and_write": false, 00:04:20.111 "abort": true, 00:04:20.111 "nvme_admin": false, 00:04:20.111 "nvme_io": false 00:04:20.111 }, 00:04:20.111 "memory_domains": [ 00:04:20.111 { 00:04:20.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.111 "dma_device_type": 2 00:04:20.111 } 00:04:20.111 ], 00:04:20.111 "driver_specific": {} 00:04:20.111 } 00:04:20.111 ]' 00:04:20.111 04:01:34 -- rpc/rpc.sh@32 -- # jq length 00:04:20.111 04:01:34 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:20.111 04:01:34 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:20.111 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.111 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.111 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.111 04:01:34 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:20.111 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.111 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.111 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.111 04:01:34 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:20.112 04:01:34 -- rpc/rpc.sh@36 -- # jq length 00:04:20.112 04:01:34 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:20.112 00:04:20.112 real 0m0.120s 00:04:20.112 user 0m0.067s 00:04:20.112 sys 0m0.016s 00:04:20.112 04:01:34 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.112 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.112 ************************************ 00:04:20.112 END TEST rpc_plugins 00:04:20.112 ************************************ 00:04:20.112 04:01:34 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:20.112 04:01:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.112 04:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.112 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.112 ************************************ 00:04:20.112 START TEST rpc_trace_cmd_test 00:04:20.112 ************************************ 00:04:20.112 04:01:34 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:20.112 04:01:34 -- rpc/rpc.sh@40 -- # local info 00:04:20.112 04:01:34 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:20.112 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.112 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.112 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.112 04:01:34 -- rpc/rpc.sh@42 -- # info='{ 00:04:20.112 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3790002", 00:04:20.112 "tpoint_group_mask": "0x8", 00:04:20.112 "iscsi_conn": { 00:04:20.112 "mask": "0x2", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "scsi": { 00:04:20.112 "mask": "0x4", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "bdev": { 00:04:20.112 "mask": "0x8", 00:04:20.112 "tpoint_mask": "0xffffffffffffffff" 00:04:20.112 }, 00:04:20.112 "nvmf_rdma": { 00:04:20.112 "mask": "0x10", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "nvmf_tcp": { 00:04:20.112 "mask": "0x20", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "ftl": { 00:04:20.112 "mask": "0x40", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "blobfs": { 00:04:20.112 "mask": "0x80", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "dsa": { 00:04:20.112 "mask": "0x200", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "thread": { 00:04:20.112 "mask": "0x400", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "nvme_pcie": { 00:04:20.112 "mask": "0x800", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "iaa": { 00:04:20.112 "mask": "0x1000", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "nvme_tcp": { 00:04:20.112 "mask": "0x2000", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 }, 00:04:20.112 "bdev_nvme": { 00:04:20.112 "mask": "0x4000", 00:04:20.112 "tpoint_mask": "0x0" 00:04:20.112 } 00:04:20.112 }' 00:04:20.373 04:01:34 -- rpc/rpc.sh@43 -- # jq length 00:04:20.373 04:01:34 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:20.373 04:01:34 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:20.373 04:01:34 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:20.373 04:01:34 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:20.373 04:01:34 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:20.373 04:01:34 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:20.373 04:01:34 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:20.373 04:01:34 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:20.373 04:01:34 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:20.373 00:04:20.373 real 0m0.180s 00:04:20.373 user 0m0.152s 00:04:20.373 sys 0m0.022s 00:04:20.373 04:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.373 04:01:34 -- 
common/autotest_common.sh@10 -- # set +x 00:04:20.373 ************************************ 00:04:20.373 END TEST rpc_trace_cmd_test 00:04:20.373 ************************************ 00:04:20.373 04:01:34 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:20.373 04:01:34 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:20.373 04:01:34 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:20.373 04:01:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.373 04:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.373 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.373 ************************************ 00:04:20.373 START TEST rpc_daemon_integrity 00:04:20.373 ************************************ 00:04:20.373 04:01:34 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:20.373 04:01:34 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.373 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.373 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.373 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.373 04:01:34 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.373 04:01:34 -- rpc/rpc.sh@13 -- # jq length 00:04:20.373 04:01:34 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.373 04:01:34 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.373 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.373 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.373 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.373 04:01:34 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:20.373 04:01:34 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.373 04:01:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.373 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.635 04:01:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.635 04:01:34 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.635 { 00:04:20.635 "name": "Malloc2", 00:04:20.635 "aliases": [ 00:04:20.635 "2cf1a83d-c75b-4ecb-867a-20ded08f1976" 00:04:20.635 ], 00:04:20.635 "product_name": "Malloc disk", 00:04:20.635 "block_size": 512, 00:04:20.635 "num_blocks": 16384, 00:04:20.635 "uuid": "2cf1a83d-c75b-4ecb-867a-20ded08f1976", 00:04:20.635 "assigned_rate_limits": { 00:04:20.635 "rw_ios_per_sec": 0, 00:04:20.635 "rw_mbytes_per_sec": 0, 00:04:20.635 "r_mbytes_per_sec": 0, 00:04:20.635 "w_mbytes_per_sec": 0 00:04:20.635 }, 00:04:20.635 "claimed": false, 00:04:20.635 "zoned": false, 00:04:20.635 "supported_io_types": { 00:04:20.635 "read": true, 00:04:20.635 "write": true, 00:04:20.635 "unmap": true, 00:04:20.635 "write_zeroes": true, 00:04:20.635 "flush": true, 00:04:20.635 "reset": true, 00:04:20.635 "compare": false, 00:04:20.635 "compare_and_write": false, 00:04:20.635 "abort": true, 00:04:20.635 "nvme_admin": false, 00:04:20.635 "nvme_io": false 00:04:20.635 }, 00:04:20.635 "memory_domains": [ 00:04:20.635 { 00:04:20.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.635 "dma_device_type": 2 00:04:20.635 } 00:04:20.635 ], 00:04:20.635 "driver_specific": {} 00:04:20.635 } 00:04:20.635 ]' 00:04:20.635 04:01:34 -- rpc/rpc.sh@17 -- # jq length 00:04:20.635 04:01:35 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.635 04:01:35 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:20.635 04:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.635 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:04:20.635 [2024-05-14 04:01:35.009563] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:20.635 [2024-05-14 04:01:35.009605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.635 [2024-05-14 04:01:35.009627] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000020d80 00:04:20.635 [2024-05-14 04:01:35.009636] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:20.635 [2024-05-14 04:01:35.011259] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.635 [2024-05-14 04:01:35.011283] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.635 Passthru0 00:04:20.635 04:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.635 04:01:35 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.635 04:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.635 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:04:20.635 04:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.635 04:01:35 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:20.635 { 00:04:20.635 "name": "Malloc2", 00:04:20.635 "aliases": [ 00:04:20.635 "2cf1a83d-c75b-4ecb-867a-20ded08f1976" 00:04:20.635 ], 00:04:20.635 "product_name": "Malloc disk", 00:04:20.635 "block_size": 512, 00:04:20.635 "num_blocks": 16384, 00:04:20.635 "uuid": "2cf1a83d-c75b-4ecb-867a-20ded08f1976", 00:04:20.635 "assigned_rate_limits": { 00:04:20.635 "rw_ios_per_sec": 0, 00:04:20.635 "rw_mbytes_per_sec": 0, 00:04:20.636 "r_mbytes_per_sec": 0, 00:04:20.636 "w_mbytes_per_sec": 0 00:04:20.636 }, 00:04:20.636 "claimed": true, 00:04:20.636 "claim_type": "exclusive_write", 00:04:20.636 "zoned": false, 00:04:20.636 "supported_io_types": { 00:04:20.636 "read": true, 00:04:20.636 "write": true, 00:04:20.636 "unmap": true, 00:04:20.636 "write_zeroes": true, 00:04:20.636 "flush": true, 00:04:20.636 "reset": true, 00:04:20.636 "compare": false, 00:04:20.636 "compare_and_write": false, 00:04:20.636 "abort": true, 00:04:20.636 "nvme_admin": false, 00:04:20.636 "nvme_io": false 00:04:20.636 }, 00:04:20.636 "memory_domains": [ 00:04:20.636 { 00:04:20.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.636 "dma_device_type": 2 00:04:20.636 } 00:04:20.636 ], 00:04:20.636 "driver_specific": {} 00:04:20.636 }, 00:04:20.636 { 00:04:20.636 "name": "Passthru0", 00:04:20.636 "aliases": [ 00:04:20.636 "040b41d8-fb66-5b84-a1f2-74477deeeac2" 00:04:20.636 ], 00:04:20.636 "product_name": "passthru", 00:04:20.636 "block_size": 512, 00:04:20.636 "num_blocks": 16384, 00:04:20.636 "uuid": "040b41d8-fb66-5b84-a1f2-74477deeeac2", 00:04:20.636 "assigned_rate_limits": { 00:04:20.636 "rw_ios_per_sec": 0, 00:04:20.636 "rw_mbytes_per_sec": 0, 00:04:20.636 "r_mbytes_per_sec": 0, 00:04:20.636 "w_mbytes_per_sec": 0 00:04:20.636 }, 00:04:20.636 "claimed": false, 00:04:20.636 "zoned": false, 00:04:20.636 "supported_io_types": { 00:04:20.636 "read": true, 00:04:20.636 "write": true, 00:04:20.636 "unmap": true, 00:04:20.636 "write_zeroes": true, 00:04:20.636 "flush": true, 00:04:20.636 "reset": true, 00:04:20.636 "compare": false, 00:04:20.636 "compare_and_write": false, 00:04:20.636 "abort": true, 00:04:20.636 "nvme_admin": false, 00:04:20.636 "nvme_io": false 00:04:20.636 }, 00:04:20.636 "memory_domains": [ 00:04:20.636 { 00:04:20.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.636 "dma_device_type": 2 00:04:20.636 } 00:04:20.636 ], 00:04:20.636 "driver_specific": { 00:04:20.636 "passthru": { 00:04:20.636 "name": 
"Passthru0", 00:04:20.636 "base_bdev_name": "Malloc2" 00:04:20.636 } 00:04:20.636 } 00:04:20.636 } 00:04:20.636 ]' 00:04:20.636 04:01:35 -- rpc/rpc.sh@21 -- # jq length 00:04:20.636 04:01:35 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.636 04:01:35 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.636 04:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.636 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:04:20.636 04:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.636 04:01:35 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:20.636 04:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.636 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:04:20.636 04:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.636 04:01:35 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.636 04:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.636 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:04:20.636 04:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.636 04:01:35 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.636 04:01:35 -- rpc/rpc.sh@26 -- # jq length 00:04:20.636 04:01:35 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.636 00:04:20.636 real 0m0.230s 00:04:20.636 user 0m0.129s 00:04:20.636 sys 0m0.032s 00:04:20.636 04:01:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.636 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:04:20.636 ************************************ 00:04:20.636 END TEST rpc_daemon_integrity 00:04:20.636 ************************************ 00:04:20.636 04:01:35 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:20.636 04:01:35 -- rpc/rpc.sh@84 -- # killprocess 3790002 00:04:20.636 04:01:35 -- common/autotest_common.sh@926 -- # '[' -z 3790002 ']' 00:04:20.636 04:01:35 -- common/autotest_common.sh@930 -- # kill -0 3790002 00:04:20.636 04:01:35 -- common/autotest_common.sh@931 -- # uname 00:04:20.636 04:01:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:20.636 04:01:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3790002 00:04:20.636 04:01:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:20.636 04:01:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:20.636 04:01:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3790002' 00:04:20.636 killing process with pid 3790002 00:04:20.636 04:01:35 -- common/autotest_common.sh@945 -- # kill 3790002 00:04:20.636 04:01:35 -- common/autotest_common.sh@950 -- # wait 3790002 00:04:21.577 00:04:21.577 real 0m2.713s 00:04:21.577 user 0m3.155s 00:04:21.577 sys 0m0.635s 00:04:21.577 04:01:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.577 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.577 ************************************ 00:04:21.577 END TEST rpc 00:04:21.577 ************************************ 00:04:21.577 04:01:36 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.577 04:01:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.577 04:01:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.577 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.577 ************************************ 00:04:21.577 START TEST rpc_client 00:04:21.577 ************************************ 00:04:21.577 04:01:36 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:21.836 * Looking for test storage... 00:04:21.836 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client 00:04:21.836 04:01:36 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:21.836 OK 00:04:21.836 04:01:36 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:21.836 00:04:21.836 real 0m0.105s 00:04:21.836 user 0m0.048s 00:04:21.836 sys 0m0.061s 00:04:21.836 04:01:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.836 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.836 ************************************ 00:04:21.836 END TEST rpc_client 00:04:21.836 ************************************ 00:04:21.836 04:01:36 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:21.836 04:01:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.836 04:01:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.836 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.836 ************************************ 00:04:21.836 START TEST json_config 00:04:21.836 ************************************ 00:04:21.836 04:01:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:21.836 04:01:36 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:21.836 04:01:36 -- nvmf/common.sh@7 -- # uname -s 00:04:21.836 04:01:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.836 04:01:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.836 04:01:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.836 04:01:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.837 04:01:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.837 04:01:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.837 04:01:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.837 04:01:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.837 04:01:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.837 04:01:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.837 04:01:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:04:21.837 04:01:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:04:21.837 04:01:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.837 04:01:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.837 04:01:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.837 04:01:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:21.837 04:01:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.837 04:01:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.837 04:01:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.837 04:01:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.837 04:01:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.837 04:01:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.837 04:01:36 -- paths/export.sh@5 -- # export PATH 00:04:21.837 04:01:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.837 04:01:36 -- nvmf/common.sh@46 -- # : 0 00:04:21.837 04:01:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:21.837 04:01:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:21.837 04:01:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:21.837 04:01:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.837 04:01:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.837 04:01:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:21.837 04:01:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:21.837 04:01:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:21.837 04:01:36 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:21.837 04:01:36 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:21.837 04:01:36 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:21.837 04:01:36 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:21.837 04:01:36 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:21.837 04:01:36 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:21.837 04:01:36 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:21.837 04:01:36 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:21.837 04:01:36 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:21.837 04:01:36 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:21.837 04:01:36 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json') 00:04:21.837 04:01:36 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:21.837 04:01:36 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:21.837 04:01:36 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.837 04:01:36 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:21.837 INFO: JSON configuration test init 00:04:21.837 04:01:36 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:21.837 04:01:36 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:21.837 04:01:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:21.837 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.837 04:01:36 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:21.837 04:01:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:21.837 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.837 04:01:36 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:21.837 04:01:36 -- json_config/json_config.sh@98 -- # local app=target 00:04:21.837 04:01:36 -- json_config/json_config.sh@99 -- # shift 00:04:21.837 04:01:36 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:21.837 04:01:36 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:21.837 04:01:36 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:21.837 04:01:36 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:21.837 04:01:36 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:21.837 04:01:36 -- json_config/json_config.sh@111 -- # app_pid[$app]=3790816 00:04:21.837 04:01:36 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:21.837 Waiting for target to run... 00:04:21.837 04:01:36 -- json_config/json_config.sh@114 -- # waitforlisten 3790816 /var/tmp/spdk_tgt.sock 00:04:21.837 04:01:36 -- common/autotest_common.sh@819 -- # '[' -z 3790816 ']' 00:04:21.837 04:01:36 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:21.837 04:01:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.837 04:01:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:21.837 04:01:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.837 04:01:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:21.837 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:21.837 [2024-05-14 04:01:36.374976] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:21.837 [2024-05-14 04:01:36.375057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3790816 ] 00:04:21.837 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.098 [2024-05-14 04:01:36.625113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.359 [2024-05-14 04:01:36.703404] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:22.359 [2024-05-14 04:01:36.703577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.617 04:01:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:22.617 04:01:37 -- common/autotest_common.sh@852 -- # return 0 00:04:22.617 04:01:37 -- json_config/json_config.sh@115 -- # echo '' 00:04:22.617 00:04:22.617 04:01:37 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:22.617 04:01:37 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:22.617 04:01:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:22.617 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:04:22.617 04:01:37 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:22.617 04:01:37 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:22.617 04:01:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:22.617 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:04:22.618 04:01:37 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:22.618 04:01:37 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:22.618 04:01:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:29.191 04:01:43 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:29.191 04:01:43 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:29.191 04:01:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:29.191 04:01:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.191 04:01:43 -- json_config/json_config.sh@48 -- # local ret=0 00:04:29.191 04:01:43 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:29.191 04:01:43 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:29.191 04:01:43 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:29.191 04:01:43 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:29.191 04:01:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:29.191 04:01:43 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:29.191 04:01:43 -- json_config/json_config.sh@51 -- # local get_types 00:04:29.192 04:01:43 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:29.192 04:01:43 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:29.192 04:01:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:29.192 04:01:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.192 04:01:43 -- json_config/json_config.sh@58 -- # return 0 00:04:29.192 04:01:43 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:29.192 04:01:43 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:29.192 04:01:43 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:29.192 04:01:43 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:29.192 04:01:43 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:29.192 04:01:43 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:29.192 04:01:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:29.192 04:01:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.192 04:01:43 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:29.192 04:01:43 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:29.192 04:01:43 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:29.192 04:01:43 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.192 04:01:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.192 MallocForNvmf0 00:04:29.192 04:01:43 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.192 04:01:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.192 MallocForNvmf1 00:04:29.192 04:01:43 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.192 04:01:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.451 [2024-05-14 04:01:43.822619] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:29.451 04:01:43 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:29.451 04:01:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:29.451 04:01:43 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:29.451 04:01:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:29.709 04:01:44 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:29.709 04:01:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:29.709 04:01:44 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:29.709 04:01:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:29.969 [2024-05-14 04:01:44.339054] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:29.969 04:01:44 -- 
json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:29.969 04:01:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:29.969 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:04:29.969 04:01:44 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:29.969 04:01:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:29.969 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:04:29.969 04:01:44 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:29.969 04:01:44 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:29.969 04:01:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:30.228 MallocBdevForConfigChangeCheck 00:04:30.228 04:01:44 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:30.228 04:01:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:30.228 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.228 04:01:44 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:30.228 04:01:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.487 04:01:44 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:30.487 INFO: shutting down applications... 00:04:30.487 04:01:44 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:30.487 04:01:44 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:30.487 04:01:44 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:30.487 04:01:44 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:35.763 Calling clear_iscsi_subsystem 00:04:35.763 Calling clear_nvmf_subsystem 00:04:35.763 Calling clear_nbd_subsystem 00:04:35.764 Calling clear_ublk_subsystem 00:04:35.764 Calling clear_vhost_blk_subsystem 00:04:35.764 Calling clear_vhost_scsi_subsystem 00:04:35.764 Calling clear_scheduler_subsystem 00:04:35.764 Calling clear_bdev_subsystem 00:04:35.764 Calling clear_accel_subsystem 00:04:35.764 Calling clear_vmd_subsystem 00:04:35.764 Calling clear_sock_subsystem 00:04:35.764 Calling clear_iobuf_subsystem 00:04:35.764 04:01:49 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py 00:04:35.764 04:01:49 -- json_config/json_config.sh@396 -- # count=100 00:04:35.764 04:01:49 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:35.764 04:01:49 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.764 04:01:49 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:35.764 04:01:49 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:35.764 04:01:49 -- json_config/json_config.sh@398 -- # break 00:04:35.764 04:01:49 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:35.764 04:01:49 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:35.764 04:01:49 -- 
json_config/json_config.sh@120 -- # local app=target 00:04:35.764 04:01:49 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:35.764 04:01:49 -- json_config/json_config.sh@124 -- # [[ -n 3790816 ]] 00:04:35.764 04:01:49 -- json_config/json_config.sh@127 -- # kill -SIGINT 3790816 00:04:35.764 04:01:49 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:35.764 04:01:49 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:35.764 04:01:49 -- json_config/json_config.sh@130 -- # kill -0 3790816 00:04:35.764 04:01:49 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:36.025 04:01:50 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:36.025 04:01:50 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:36.025 04:01:50 -- json_config/json_config.sh@130 -- # kill -0 3790816 00:04:36.025 04:01:50 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:36.025 04:01:50 -- json_config/json_config.sh@132 -- # break 00:04:36.025 04:01:50 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:36.025 04:01:50 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:36.025 SPDK target shutdown done 00:04:36.025 04:01:50 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:36.025 INFO: relaunching applications... 00:04:36.025 04:01:50 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.025 04:01:50 -- json_config/json_config.sh@98 -- # local app=target 00:04:36.025 04:01:50 -- json_config/json_config.sh@99 -- # shift 00:04:36.025 04:01:50 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:36.025 04:01:50 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:36.025 04:01:50 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:36.025 04:01:50 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:36.025 04:01:50 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:36.025 04:01:50 -- json_config/json_config.sh@111 -- # app_pid[$app]=3793677 00:04:36.025 04:01:50 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:36.025 Waiting for target to run... 00:04:36.025 04:01:50 -- json_config/json_config.sh@114 -- # waitforlisten 3793677 /var/tmp/spdk_tgt.sock 00:04:36.025 04:01:50 -- common/autotest_common.sh@819 -- # '[' -z 3793677 ']' 00:04:36.025 04:01:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.025 04:01:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:36.025 04:01:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.025 04:01:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:36.025 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:04:36.025 04:01:50 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.025 [2024-05-14 04:01:50.539807] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
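Note: the configuration being saved and relaunched here was built earlier in json_config_setup_target with plain RPC calls against the target socket. A condensed sketch of that sequence, using the same commands and names that appear in this run (rpc.py abbreviates the scripts/rpc.py path invoked above):

  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420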
00:04:36.025 [2024-05-14 04:01:50.539949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3793677 ] 00:04:36.285 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.546 [2024-05-14 04:01:51.036209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.546 [2024-05-14 04:01:51.125889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:36.546 [2024-05-14 04:01:51.126085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.125 [2024-05-14 04:01:57.187207] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.125 [2024-05-14 04:01:57.219444] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:43.125 04:01:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:43.125 04:01:57 -- common/autotest_common.sh@852 -- # return 0 00:04:43.125 04:01:57 -- json_config/json_config.sh@115 -- # echo '' 00:04:43.125 00:04:43.125 04:01:57 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:43.125 04:01:57 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:43.125 INFO: Checking if target configuration is the same... 00:04:43.125 04:01:57 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.125 04:01:57 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:43.125 04:01:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.125 + '[' 2 -ne 2 ']' 00:04:43.125 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:43.125 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:04:43.125 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:43.125 +++ basename /dev/fd/62 00:04:43.125 ++ mktemp /tmp/62.XXX 00:04:43.125 + tmp_file_1=/tmp/62.sj8 00:04:43.125 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.125 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:43.125 + tmp_file_2=/tmp/spdk_tgt_config.json.6iH 00:04:43.125 + ret=0 00:04:43.125 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.423 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.423 + diff -u /tmp/62.sj8 /tmp/spdk_tgt_config.json.6iH 00:04:43.423 + echo 'INFO: JSON config files are the same' 00:04:43.423 INFO: JSON config files are the same 00:04:43.423 + rm /tmp/62.sj8 /tmp/spdk_tgt_config.json.6iH 00:04:43.423 + exit 0 00:04:43.423 04:01:57 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:43.423 04:01:57 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:43.423 INFO: changing configuration and checking if this can be detected... 
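Note: the "configuration is the same" check above does not diff the files raw; json_diff.sh normalizes both the live config (rpc.py save_config) and the saved spdk_tgt_config.json with config_filter.py -method sort before diff -u, so key ordering cannot cause a false mismatch. A rough stand-alone equivalent, assuming config_filter.py reads the config on stdin (file names here are placeholders, not the /tmp/62.* temporaries from this run):

  rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json
  config_filter.py -method sort < live.json > live.sorted
  config_filter.py -method sort < spdk_tgt_config.json > saved.sorted
  diff -u live.sorted saved.sorted && echo 'INFO: JSON config files are the same'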
00:04:43.423 04:01:57 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:43.423 04:01:57 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:43.712 04:01:58 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:43.712 04:01:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.712 04:01:58 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.712 + '[' 2 -ne 2 ']' 00:04:43.712 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:43.712 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:04:43.712 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:43.712 +++ basename /dev/fd/62 00:04:43.712 ++ mktemp /tmp/62.XXX 00:04:43.712 + tmp_file_1=/tmp/62.f4p 00:04:43.712 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.712 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:43.712 + tmp_file_2=/tmp/spdk_tgt_config.json.OxP 00:04:43.712 + ret=0 00:04:43.712 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.971 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.971 + diff -u /tmp/62.f4p /tmp/spdk_tgt_config.json.OxP 00:04:43.971 + ret=1 00:04:43.971 + echo '=== Start of file: /tmp/62.f4p ===' 00:04:43.971 + cat /tmp/62.f4p 00:04:43.971 + echo '=== End of file: /tmp/62.f4p ===' 00:04:43.971 + echo '' 00:04:43.971 + echo '=== Start of file: /tmp/spdk_tgt_config.json.OxP ===' 00:04:43.971 + cat /tmp/spdk_tgt_config.json.OxP 00:04:43.971 + echo '=== End of file: /tmp/spdk_tgt_config.json.OxP ===' 00:04:43.971 + echo '' 00:04:43.971 + rm /tmp/62.f4p /tmp/spdk_tgt_config.json.OxP 00:04:43.971 + exit 1 00:04:43.971 04:01:58 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:43.971 INFO: configuration change detected. 
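Note: the change-detection pass above is the same sorted diff, re-run after deleting the canary bdev that was created for exactly this purpose; a non-empty diff (ret=1) is the expected pass condition here, not a failure. In isolation:

  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  # re-run the sorted diff from the previous step; it should now report a difference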
00:04:43.971 04:01:58 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:43.971 04:01:58 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:43.971 04:01:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:43.971 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.971 04:01:58 -- json_config/json_config.sh@360 -- # local ret=0 00:04:43.971 04:01:58 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:43.971 04:01:58 -- json_config/json_config.sh@370 -- # [[ -n 3793677 ]] 00:04:43.971 04:01:58 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:43.971 04:01:58 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:43.971 04:01:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:43.971 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.971 04:01:58 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:43.971 04:01:58 -- json_config/json_config.sh@246 -- # uname -s 00:04:43.971 04:01:58 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:43.971 04:01:58 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:43.971 04:01:58 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:43.971 04:01:58 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:43.971 04:01:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:43.971 04:01:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.971 04:01:58 -- json_config/json_config.sh@376 -- # killprocess 3793677 00:04:43.971 04:01:58 -- common/autotest_common.sh@926 -- # '[' -z 3793677 ']' 00:04:43.971 04:01:58 -- common/autotest_common.sh@930 -- # kill -0 3793677 00:04:43.971 04:01:58 -- common/autotest_common.sh@931 -- # uname 00:04:43.971 04:01:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:43.971 04:01:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3793677 00:04:43.971 04:01:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:43.971 04:01:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:43.971 04:01:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3793677' 00:04:43.971 killing process with pid 3793677 00:04:43.971 04:01:58 -- common/autotest_common.sh@945 -- # kill 3793677 00:04:43.971 04:01:58 -- common/autotest_common.sh@950 -- # wait 3793677 00:04:47.264 04:02:01 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.264 04:02:01 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:47.264 04:02:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:47.264 04:02:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.264 04:02:01 -- json_config/json_config.sh@381 -- # return 0 00:04:47.264 04:02:01 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:47.264 INFO: Success 00:04:47.264 00:04:47.264 real 0m25.285s 00:04:47.264 user 0m24.711s 00:04:47.264 sys 0m2.221s 00:04:47.264 04:02:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.264 04:02:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.264 ************************************ 00:04:47.264 END TEST json_config 00:04:47.264 ************************************ 00:04:47.264 04:02:01 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:47.264 04:02:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:47.264 04:02:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.264 04:02:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.264 ************************************ 00:04:47.264 START TEST json_config_extra_key 00:04:47.264 ************************************ 00:04:47.264 04:02:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.264 04:02:01 -- nvmf/common.sh@7 -- # uname -s 00:04:47.264 04:02:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.264 04:02:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.264 04:02:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.264 04:02:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.264 04:02:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.264 04:02:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.264 04:02:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.264 04:02:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.264 04:02:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.264 04:02:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.264 04:02:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:04:47.264 04:02:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:04:47.264 04:02:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.264 04:02:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.264 04:02:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.264 04:02:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:47.264 04:02:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.264 04:02:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.264 04:02:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.264 04:02:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.264 04:02:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.264 04:02:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.264 04:02:01 -- paths/export.sh@5 -- # export PATH 00:04:47.264 04:02:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.264 04:02:01 -- nvmf/common.sh@46 -- # : 0 00:04:47.264 04:02:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:47.264 04:02:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:47.264 04:02:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:47.264 04:02:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.264 04:02:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.264 04:02:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:47.264 04:02:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:47.264 04:02:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:47.264 INFO: launching applications... 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:04:47.264 04:02:01 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:47.265 04:02:01 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:47.265 04:02:01 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:47.265 04:02:01 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:47.265 04:02:01 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3795997 00:04:47.265 04:02:01 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:47.265 Waiting for target to run... 
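Note: for this test the target is launched directly from extra_key.json and the harness blocks in waitforlisten until the RPC socket answers before issuing any further commands. One way to approximate that wait (the polling detail is an assumption; waitforlisten's exact implementation is not shown in this log, only its "Waiting for process to start up..." message):

  spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json &
  tgt_pid=$!
  until rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5    # keep retrying until the RPC server is listening
  done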
00:04:47.265 04:02:01 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3795997 /var/tmp/spdk_tgt.sock 00:04:47.265 04:02:01 -- common/autotest_common.sh@819 -- # '[' -z 3795997 ']' 00:04:47.265 04:02:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.265 04:02:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:47.265 04:02:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.265 04:02:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:47.265 04:02:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.265 04:02:01 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:04:47.265 [2024-05-14 04:02:01.756781] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:47.265 [2024-05-14 04:02:01.756936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795997 ] 00:04:47.265 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.836 [2024-05-14 04:02:02.262735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.836 [2024-05-14 04:02:02.349339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.836 [2024-05-14 04:02:02.349531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.775 04:02:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:48.775 04:02:03 -- common/autotest_common.sh@852 -- # return 0 00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:48.775 00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:48.775 INFO: shutting down applications... 
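Note: shutdown in these tests is cooperative. SIGINT is sent to the recorded PID and the harness polls with kill -0, up to 30 half-second attempts as the (( i < 30 )) / sleep 0.5 iterations that follow show, before announcing "SPDK target shutdown done". The pattern in isolation:

  kill -SIGINT "$tgt_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$tgt_pid" 2>/dev/null || break    # process gone -> shutdown complete
      sleep 0.5
  done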
00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3795997 ]] 00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3795997 00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3795997 00:04:48.775 04:02:03 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:49.345 04:02:03 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:49.345 04:02:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:49.345 04:02:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3795997 00:04:49.345 04:02:03 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:49.604 04:02:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:49.604 04:02:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:49.604 04:02:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3795997 00:04:49.604 04:02:04 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:49.604 04:02:04 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:49.604 04:02:04 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:49.604 04:02:04 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:49.604 SPDK target shutdown done 00:04:49.604 04:02:04 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:49.604 Success 00:04:49.604 00:04:49.604 real 0m2.602s 00:04:49.604 user 0m2.353s 00:04:49.604 sys 0m0.692s 00:04:49.604 04:02:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.604 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:04:49.604 ************************************ 00:04:49.604 END TEST json_config_extra_key 00:04:49.604 ************************************ 00:04:49.863 04:02:04 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.863 04:02:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.863 04:02:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.863 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:04:49.863 ************************************ 00:04:49.863 START TEST alias_rpc 00:04:49.863 ************************************ 00:04:49.863 04:02:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.863 * Looking for test storage... 
00:04:49.863 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc 00:04:49.863 04:02:04 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:49.863 04:02:04 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3796580 00:04:49.863 04:02:04 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3796580 00:04:49.863 04:02:04 -- common/autotest_common.sh@819 -- # '[' -z 3796580 ']' 00:04:49.863 04:02:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.863 04:02:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:49.863 04:02:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.863 04:02:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:49.863 04:02:04 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.863 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:04:49.863 [2024-05-14 04:02:04.374909] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:49.863 [2024-05-14 04:02:04.375031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3796580 ] 00:04:49.863 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.122 [2024-05-14 04:02:04.465701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.122 [2024-05-14 04:02:04.557178] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.122 [2024-05-14 04:02:04.557357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.689 04:02:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:50.689 04:02:05 -- common/autotest_common.sh@852 -- # return 0 00:04:50.689 04:02:05 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:50.948 04:02:05 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3796580 00:04:50.948 04:02:05 -- common/autotest_common.sh@926 -- # '[' -z 3796580 ']' 00:04:50.948 04:02:05 -- common/autotest_common.sh@930 -- # kill -0 3796580 00:04:50.948 04:02:05 -- common/autotest_common.sh@931 -- # uname 00:04:50.948 04:02:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:50.948 04:02:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3796580 00:04:50.948 04:02:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:50.948 04:02:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:50.948 04:02:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3796580' 00:04:50.948 killing process with pid 3796580 00:04:50.948 04:02:05 -- common/autotest_common.sh@945 -- # kill 3796580 00:04:50.948 04:02:05 -- common/autotest_common.sh@950 -- # wait 3796580 00:04:51.886 00:04:51.886 real 0m1.957s 00:04:51.886 user 0m2.011s 00:04:51.886 sys 0m0.422s 00:04:51.886 04:02:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.886 04:02:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.886 ************************************ 00:04:51.886 END TEST alias_rpc 00:04:51.886 ************************************ 00:04:51.886 04:02:06 -- 
spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:51.886 04:02:06 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:51.886 04:02:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.886 04:02:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.886 04:02:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.886 ************************************ 00:04:51.886 START TEST spdkcli_tcp 00:04:51.886 ************************************ 00:04:51.886 04:02:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:51.886 * Looking for test storage... 00:04:51.886 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:04:51.886 04:02:06 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:04:51.886 04:02:06 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:51.886 04:02:06 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:04:51.886 04:02:06 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:51.886 04:02:06 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:51.886 04:02:06 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:51.886 04:02:06 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:51.886 04:02:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:51.886 04:02:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.886 04:02:06 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3797106 00:04:51.886 04:02:06 -- spdkcli/tcp.sh@27 -- # waitforlisten 3797106 00:04:51.886 04:02:06 -- common/autotest_common.sh@819 -- # '[' -z 3797106 ']' 00:04:51.886 04:02:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.886 04:02:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:51.886 04:02:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.886 04:02:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:51.886 04:02:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.886 04:02:06 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:51.886 [2024-05-14 04:02:06.400490] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
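Note: spdkcli_tcp exercises the RPC server over TCP rather than the UNIX socket. In the output that follows, socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py is pointed at the TCP endpoint with retries enabled; the long rpc_get_methods listing is the round trip through that bridge. A minimal reproduction of the bridge, as used by the test:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods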
00:04:51.886 [2024-05-14 04:02:06.400646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797106 ] 00:04:52.145 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.145 [2024-05-14 04:02:06.533143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.145 [2024-05-14 04:02:06.625081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.145 [2024-05-14 04:02:06.625344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.145 [2024-05-14 04:02:06.625353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.715 04:02:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:52.715 04:02:07 -- common/autotest_common.sh@852 -- # return 0 00:04:52.716 04:02:07 -- spdkcli/tcp.sh@31 -- # socat_pid=3797243 00:04:52.716 04:02:07 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:52.716 04:02:07 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:52.716 [ 00:04:52.716 "bdev_malloc_delete", 00:04:52.716 "bdev_malloc_create", 00:04:52.716 "bdev_null_resize", 00:04:52.716 "bdev_null_delete", 00:04:52.716 "bdev_null_create", 00:04:52.716 "bdev_nvme_cuse_unregister", 00:04:52.716 "bdev_nvme_cuse_register", 00:04:52.716 "bdev_opal_new_user", 00:04:52.716 "bdev_opal_set_lock_state", 00:04:52.716 "bdev_opal_delete", 00:04:52.716 "bdev_opal_get_info", 00:04:52.716 "bdev_opal_create", 00:04:52.716 "bdev_nvme_opal_revert", 00:04:52.716 "bdev_nvme_opal_init", 00:04:52.716 "bdev_nvme_send_cmd", 00:04:52.716 "bdev_nvme_get_path_iostat", 00:04:52.716 "bdev_nvme_get_mdns_discovery_info", 00:04:52.716 "bdev_nvme_stop_mdns_discovery", 00:04:52.716 "bdev_nvme_start_mdns_discovery", 00:04:52.716 "bdev_nvme_set_multipath_policy", 00:04:52.716 "bdev_nvme_set_preferred_path", 00:04:52.716 "bdev_nvme_get_io_paths", 00:04:52.716 "bdev_nvme_remove_error_injection", 00:04:52.716 "bdev_nvme_add_error_injection", 00:04:52.716 "bdev_nvme_get_discovery_info", 00:04:52.716 "bdev_nvme_stop_discovery", 00:04:52.716 "bdev_nvme_start_discovery", 00:04:52.716 "bdev_nvme_get_controller_health_info", 00:04:52.716 "bdev_nvme_disable_controller", 00:04:52.716 "bdev_nvme_enable_controller", 00:04:52.716 "bdev_nvme_reset_controller", 00:04:52.716 "bdev_nvme_get_transport_statistics", 00:04:52.716 "bdev_nvme_apply_firmware", 00:04:52.716 "bdev_nvme_detach_controller", 00:04:52.716 "bdev_nvme_get_controllers", 00:04:52.716 "bdev_nvme_attach_controller", 00:04:52.716 "bdev_nvme_set_hotplug", 00:04:52.716 "bdev_nvme_set_options", 00:04:52.716 "bdev_passthru_delete", 00:04:52.716 "bdev_passthru_create", 00:04:52.716 "bdev_lvol_grow_lvstore", 00:04:52.716 "bdev_lvol_get_lvols", 00:04:52.716 "bdev_lvol_get_lvstores", 00:04:52.716 "bdev_lvol_delete", 00:04:52.716 "bdev_lvol_set_read_only", 00:04:52.716 "bdev_lvol_resize", 00:04:52.716 "bdev_lvol_decouple_parent", 00:04:52.716 "bdev_lvol_inflate", 00:04:52.716 "bdev_lvol_rename", 00:04:52.716 "bdev_lvol_clone_bdev", 00:04:52.716 "bdev_lvol_clone", 00:04:52.716 "bdev_lvol_snapshot", 00:04:52.716 "bdev_lvol_create", 00:04:52.716 "bdev_lvol_delete_lvstore", 00:04:52.716 "bdev_lvol_rename_lvstore", 00:04:52.716 "bdev_lvol_create_lvstore", 00:04:52.716 "bdev_raid_set_options", 00:04:52.716 
"bdev_raid_remove_base_bdev", 00:04:52.716 "bdev_raid_add_base_bdev", 00:04:52.716 "bdev_raid_delete", 00:04:52.716 "bdev_raid_create", 00:04:52.716 "bdev_raid_get_bdevs", 00:04:52.716 "bdev_error_inject_error", 00:04:52.716 "bdev_error_delete", 00:04:52.716 "bdev_error_create", 00:04:52.716 "bdev_split_delete", 00:04:52.716 "bdev_split_create", 00:04:52.716 "bdev_delay_delete", 00:04:52.716 "bdev_delay_create", 00:04:52.716 "bdev_delay_update_latency", 00:04:52.716 "bdev_zone_block_delete", 00:04:52.716 "bdev_zone_block_create", 00:04:52.716 "blobfs_create", 00:04:52.716 "blobfs_detect", 00:04:52.716 "blobfs_set_cache_size", 00:04:52.716 "bdev_aio_delete", 00:04:52.716 "bdev_aio_rescan", 00:04:52.716 "bdev_aio_create", 00:04:52.716 "bdev_ftl_set_property", 00:04:52.716 "bdev_ftl_get_properties", 00:04:52.716 "bdev_ftl_get_stats", 00:04:52.716 "bdev_ftl_unmap", 00:04:52.716 "bdev_ftl_unload", 00:04:52.716 "bdev_ftl_delete", 00:04:52.716 "bdev_ftl_load", 00:04:52.716 "bdev_ftl_create", 00:04:52.716 "bdev_virtio_attach_controller", 00:04:52.716 "bdev_virtio_scsi_get_devices", 00:04:52.716 "bdev_virtio_detach_controller", 00:04:52.716 "bdev_virtio_blk_set_hotplug", 00:04:52.716 "bdev_iscsi_delete", 00:04:52.716 "bdev_iscsi_create", 00:04:52.716 "bdev_iscsi_set_options", 00:04:52.716 "accel_error_inject_error", 00:04:52.716 "ioat_scan_accel_module", 00:04:52.716 "dsa_scan_accel_module", 00:04:52.716 "iaa_scan_accel_module", 00:04:52.716 "iscsi_set_options", 00:04:52.716 "iscsi_get_auth_groups", 00:04:52.716 "iscsi_auth_group_remove_secret", 00:04:52.716 "iscsi_auth_group_add_secret", 00:04:52.716 "iscsi_delete_auth_group", 00:04:52.716 "iscsi_create_auth_group", 00:04:52.716 "iscsi_set_discovery_auth", 00:04:52.716 "iscsi_get_options", 00:04:52.716 "iscsi_target_node_request_logout", 00:04:52.716 "iscsi_target_node_set_redirect", 00:04:52.716 "iscsi_target_node_set_auth", 00:04:52.716 "iscsi_target_node_add_lun", 00:04:52.716 "iscsi_get_connections", 00:04:52.716 "iscsi_portal_group_set_auth", 00:04:52.716 "iscsi_start_portal_group", 00:04:52.716 "iscsi_delete_portal_group", 00:04:52.716 "iscsi_create_portal_group", 00:04:52.716 "iscsi_get_portal_groups", 00:04:52.716 "iscsi_delete_target_node", 00:04:52.716 "iscsi_target_node_remove_pg_ig_maps", 00:04:52.716 "iscsi_target_node_add_pg_ig_maps", 00:04:52.716 "iscsi_create_target_node", 00:04:52.716 "iscsi_get_target_nodes", 00:04:52.716 "iscsi_delete_initiator_group", 00:04:52.716 "iscsi_initiator_group_remove_initiators", 00:04:52.716 "iscsi_initiator_group_add_initiators", 00:04:52.716 "iscsi_create_initiator_group", 00:04:52.716 "iscsi_get_initiator_groups", 00:04:52.716 "nvmf_set_crdt", 00:04:52.716 "nvmf_set_config", 00:04:52.716 "nvmf_set_max_subsystems", 00:04:52.716 "nvmf_subsystem_get_listeners", 00:04:52.716 "nvmf_subsystem_get_qpairs", 00:04:52.716 "nvmf_subsystem_get_controllers", 00:04:52.716 "nvmf_get_stats", 00:04:52.716 "nvmf_get_transports", 00:04:52.716 "nvmf_create_transport", 00:04:52.716 "nvmf_get_targets", 00:04:52.716 "nvmf_delete_target", 00:04:52.716 "nvmf_create_target", 00:04:52.716 "nvmf_subsystem_allow_any_host", 00:04:52.716 "nvmf_subsystem_remove_host", 00:04:52.716 "nvmf_subsystem_add_host", 00:04:52.716 "nvmf_subsystem_remove_ns", 00:04:52.716 "nvmf_subsystem_add_ns", 00:04:52.716 "nvmf_subsystem_listener_set_ana_state", 00:04:52.716 "nvmf_discovery_get_referrals", 00:04:52.716 "nvmf_discovery_remove_referral", 00:04:52.716 "nvmf_discovery_add_referral", 00:04:52.716 "nvmf_subsystem_remove_listener", 
00:04:52.716 "nvmf_subsystem_add_listener", 00:04:52.716 "nvmf_delete_subsystem", 00:04:52.716 "nvmf_create_subsystem", 00:04:52.716 "nvmf_get_subsystems", 00:04:52.716 "env_dpdk_get_mem_stats", 00:04:52.716 "nbd_get_disks", 00:04:52.716 "nbd_stop_disk", 00:04:52.716 "nbd_start_disk", 00:04:52.716 "ublk_recover_disk", 00:04:52.716 "ublk_get_disks", 00:04:52.716 "ublk_stop_disk", 00:04:52.716 "ublk_start_disk", 00:04:52.716 "ublk_destroy_target", 00:04:52.716 "ublk_create_target", 00:04:52.716 "virtio_blk_create_transport", 00:04:52.716 "virtio_blk_get_transports", 00:04:52.716 "vhost_controller_set_coalescing", 00:04:52.716 "vhost_get_controllers", 00:04:52.716 "vhost_delete_controller", 00:04:52.716 "vhost_create_blk_controller", 00:04:52.716 "vhost_scsi_controller_remove_target", 00:04:52.716 "vhost_scsi_controller_add_target", 00:04:52.716 "vhost_start_scsi_controller", 00:04:52.716 "vhost_create_scsi_controller", 00:04:52.716 "thread_set_cpumask", 00:04:52.716 "framework_get_scheduler", 00:04:52.716 "framework_set_scheduler", 00:04:52.716 "framework_get_reactors", 00:04:52.716 "thread_get_io_channels", 00:04:52.716 "thread_get_pollers", 00:04:52.716 "thread_get_stats", 00:04:52.716 "framework_monitor_context_switch", 00:04:52.716 "spdk_kill_instance", 00:04:52.716 "log_enable_timestamps", 00:04:52.716 "log_get_flags", 00:04:52.716 "log_clear_flag", 00:04:52.716 "log_set_flag", 00:04:52.716 "log_get_level", 00:04:52.716 "log_set_level", 00:04:52.716 "log_get_print_level", 00:04:52.716 "log_set_print_level", 00:04:52.716 "framework_enable_cpumask_locks", 00:04:52.716 "framework_disable_cpumask_locks", 00:04:52.716 "framework_wait_init", 00:04:52.716 "framework_start_init", 00:04:52.716 "scsi_get_devices", 00:04:52.716 "bdev_get_histogram", 00:04:52.716 "bdev_enable_histogram", 00:04:52.716 "bdev_set_qos_limit", 00:04:52.716 "bdev_set_qd_sampling_period", 00:04:52.716 "bdev_get_bdevs", 00:04:52.716 "bdev_reset_iostat", 00:04:52.716 "bdev_get_iostat", 00:04:52.716 "bdev_examine", 00:04:52.716 "bdev_wait_for_examine", 00:04:52.716 "bdev_set_options", 00:04:52.716 "notify_get_notifications", 00:04:52.716 "notify_get_types", 00:04:52.716 "accel_get_stats", 00:04:52.716 "accel_set_options", 00:04:52.716 "accel_set_driver", 00:04:52.716 "accel_crypto_key_destroy", 00:04:52.716 "accel_crypto_keys_get", 00:04:52.716 "accel_crypto_key_create", 00:04:52.716 "accel_assign_opc", 00:04:52.716 "accel_get_module_info", 00:04:52.716 "accel_get_opc_assignments", 00:04:52.716 "vmd_rescan", 00:04:52.716 "vmd_remove_device", 00:04:52.716 "vmd_enable", 00:04:52.716 "sock_set_default_impl", 00:04:52.716 "sock_impl_set_options", 00:04:52.716 "sock_impl_get_options", 00:04:52.717 "iobuf_get_stats", 00:04:52.717 "iobuf_set_options", 00:04:52.717 "framework_get_pci_devices", 00:04:52.717 "framework_get_config", 00:04:52.717 "framework_get_subsystems", 00:04:52.717 "trace_get_info", 00:04:52.717 "trace_get_tpoint_group_mask", 00:04:52.717 "trace_disable_tpoint_group", 00:04:52.717 "trace_enable_tpoint_group", 00:04:52.717 "trace_clear_tpoint_mask", 00:04:52.717 "trace_set_tpoint_mask", 00:04:52.717 "spdk_get_version", 00:04:52.717 "rpc_get_methods" 00:04:52.717 ] 00:04:52.979 04:02:07 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:52.979 04:02:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:52.979 04:02:07 -- common/autotest_common.sh@10 -- # set +x 00:04:52.979 04:02:07 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:52.979 04:02:07 -- spdkcli/tcp.sh@38 -- # killprocess 
3797106 00:04:52.979 04:02:07 -- common/autotest_common.sh@926 -- # '[' -z 3797106 ']' 00:04:52.979 04:02:07 -- common/autotest_common.sh@930 -- # kill -0 3797106 00:04:52.979 04:02:07 -- common/autotest_common.sh@931 -- # uname 00:04:52.979 04:02:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:52.979 04:02:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3797106 00:04:52.979 04:02:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:52.979 04:02:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:52.979 04:02:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3797106' 00:04:52.979 killing process with pid 3797106 00:04:52.979 04:02:07 -- common/autotest_common.sh@945 -- # kill 3797106 00:04:52.979 04:02:07 -- common/autotest_common.sh@950 -- # wait 3797106 00:04:53.919 00:04:53.919 real 0m1.985s 00:04:53.919 user 0m3.422s 00:04:53.919 sys 0m0.529s 00:04:53.919 04:02:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.919 04:02:08 -- common/autotest_common.sh@10 -- # set +x 00:04:53.919 ************************************ 00:04:53.919 END TEST spdkcli_tcp 00:04:53.919 ************************************ 00:04:53.919 04:02:08 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.919 04:02:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.919 04:02:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.919 04:02:08 -- common/autotest_common.sh@10 -- # set +x 00:04:53.919 ************************************ 00:04:53.919 START TEST dpdk_mem_utility 00:04:53.919 ************************************ 00:04:53.919 04:02:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.919 * Looking for test storage... 00:04:53.919 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility 00:04:53.919 04:02:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:53.919 04:02:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3797603 00:04:53.919 04:02:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3797603 00:04:53.919 04:02:08 -- common/autotest_common.sh@819 -- # '[' -z 3797603 ']' 00:04:53.919 04:02:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.919 04:02:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:53.919 04:02:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.919 04:02:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:53.919 04:02:08 -- common/autotest_common.sh@10 -- # set +x 00:04:53.919 04:02:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.919 [2024-05-14 04:02:08.394210] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
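Note: the memory report that follows is produced in two steps: env_dpdk_get_mem_stats asks the running target to write a raw dump (its reply names /tmp/spdk_mem_dump.txt), and dpdk_mem_info.py turns that dump into the heap/mempool/memzone summary; -m 0 adds the per-element detail for heap id 0. In isolation, against the default RPC socket this test uses:

  rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
  dpdk_mem_info.py                 # summary: heaps, mempools, memzones
  dpdk_mem_info.py -m 0            # per-element listing for heap id 0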
00:04:53.919 [2024-05-14 04:02:08.394356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797603 ] 00:04:53.919 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.180 [2024-05-14 04:02:08.525661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.180 [2024-05-14 04:02:08.617298] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.180 [2024-05-14 04:02:08.617501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.751 04:02:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:54.751 04:02:09 -- common/autotest_common.sh@852 -- # return 0 00:04:54.751 04:02:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:54.751 04:02:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:54.751 04:02:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:54.751 04:02:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.751 { 00:04:54.751 "filename": "/tmp/spdk_mem_dump.txt" 00:04:54.751 } 00:04:54.751 04:02:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:54.751 04:02:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:54.751 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:54.751 1 heaps totaling size 820.000000 MiB 00:04:54.751 size: 820.000000 MiB heap id: 0 00:04:54.751 end heaps---------- 00:04:54.751 8 mempools totaling size 598.116089 MiB 00:04:54.751 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:54.751 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:54.751 size: 84.521057 MiB name: bdev_io_3797603 00:04:54.751 size: 51.011292 MiB name: evtpool_3797603 00:04:54.751 size: 50.003479 MiB name: msgpool_3797603 00:04:54.751 size: 21.763794 MiB name: PDU_Pool 00:04:54.751 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:54.751 size: 0.026123 MiB name: Session_Pool 00:04:54.751 end mempools------- 00:04:54.751 6 memzones totaling size 4.142822 MiB 00:04:54.751 size: 1.000366 MiB name: RG_ring_0_3797603 00:04:54.751 size: 1.000366 MiB name: RG_ring_1_3797603 00:04:54.751 size: 1.000366 MiB name: RG_ring_4_3797603 00:04:54.751 size: 1.000366 MiB name: RG_ring_5_3797603 00:04:54.751 size: 0.125366 MiB name: RG_ring_2_3797603 00:04:54.751 size: 0.015991 MiB name: RG_ring_3_3797603 00:04:54.751 end memzones------- 00:04:54.751 04:02:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:54.751 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:04:54.751 list of free elements. 
size: 18.514832 MiB 00:04:54.751 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:54.751 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:54.751 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:54.751 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:54.751 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:54.751 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:54.751 element at address: 0x200019600000 with size: 0.999329 MiB 00:04:54.751 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:54.751 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:54.751 element at address: 0x200018e00000 with size: 0.959900 MiB 00:04:54.751 element at address: 0x200019900040 with size: 0.937256 MiB 00:04:54.751 element at address: 0x200000200000 with size: 0.840942 MiB 00:04:54.751 element at address: 0x20001b000000 with size: 0.583191 MiB 00:04:54.751 element at address: 0x200019200000 with size: 0.491150 MiB 00:04:54.751 element at address: 0x200019a00000 with size: 0.485657 MiB 00:04:54.751 element at address: 0x200013800000 with size: 0.470581 MiB 00:04:54.751 element at address: 0x200028400000 with size: 0.411072 MiB 00:04:54.751 element at address: 0x200003a00000 with size: 0.356140 MiB 00:04:54.751 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:04:54.751 list of standard malloc elements. size: 199.220764 MiB 00:04:54.751 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:54.751 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:54.751 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:54.751 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:54.751 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:54.751 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:54.751 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:54.751 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:54.751 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:04:54.751 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:04:54.751 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:04:54.751 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:04:54.751 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:04:54.751 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:54.751 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:54.751 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:54.751 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:54.751 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:54.751 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:54.751 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:54.751 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:04:54.751 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:04:54.751 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:04:54.751 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:04:54.751 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:04:54.751 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:04:54.751 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:54.751 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:54.751 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 
00:04:54.751 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:54.751 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:04:54.751 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:04:54.752 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:04:54.752 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:04:54.752 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:04:54.752 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:04:54.752 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:04:54.752 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:04:54.752 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:54.752 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:54.752 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:54.752 list of memzone associated elements. size: 602.264404 MiB 00:04:54.752 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:54.752 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:54.752 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:54.752 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:54.752 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:54.752 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3797603_0 00:04:54.752 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:54.752 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3797603_0 00:04:54.752 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:54.752 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3797603_0 00:04:54.752 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:54.752 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:54.752 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:54.752 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:54.752 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:54.752 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3797603 00:04:54.752 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:54.752 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3797603 00:04:54.752 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:54.752 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3797603 00:04:54.752 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:54.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:54.752 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:54.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:54.752 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:54.752 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:54.752 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:54.752 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:54.752 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:54.752 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3797603 00:04:54.752 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:54.752 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3797603 00:04:54.752 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:54.752 associated memzone info: size: 
1.000366 MiB name: RG_ring_4_3797603 00:04:54.752 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:54.752 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3797603 00:04:54.752 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:54.752 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3797603 00:04:54.752 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:04:54.752 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:54.752 element at address: 0x200013878780 with size: 0.500549 MiB 00:04:54.752 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:54.752 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:04:54.752 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:54.752 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:54.752 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3797603 00:04:54.752 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:04:54.752 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:54.752 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:04:54.752 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:54.752 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:54.752 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3797603 00:04:54.752 element at address: 0x20002846f540 with size: 0.002502 MiB 00:04:54.752 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:54.752 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:04:54.752 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3797603 00:04:54.752 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:54.752 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3797603 00:04:54.752 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:04:54.752 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:54.752 04:02:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:54.752 04:02:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3797603 00:04:54.752 04:02:09 -- common/autotest_common.sh@926 -- # '[' -z 3797603 ']' 00:04:54.752 04:02:09 -- common/autotest_common.sh@930 -- # kill -0 3797603 00:04:54.752 04:02:09 -- common/autotest_common.sh@931 -- # uname 00:04:54.752 04:02:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:54.752 04:02:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3797603 00:04:54.752 04:02:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:54.752 04:02:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:54.752 04:02:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3797603' 00:04:54.752 killing process with pid 3797603 00:04:54.752 04:02:09 -- common/autotest_common.sh@945 -- # kill 3797603 00:04:54.752 04:02:09 -- common/autotest_common.sh@950 -- # wait 3797603 00:04:55.694 00:04:55.694 real 0m1.924s 00:04:55.694 user 0m1.943s 00:04:55.694 sys 0m0.445s 00:04:55.694 04:02:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.694 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.694 ************************************ 00:04:55.694 END TEST dpdk_mem_utility 00:04:55.694 ************************************ 00:04:55.694 04:02:10 -- spdk/autotest.sh@187 -- # run_test event 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:04:55.694 04:02:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.694 04:02:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.694 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.694 ************************************ 00:04:55.694 START TEST event 00:04:55.694 ************************************ 00:04:55.694 04:02:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:04:55.694 * Looking for test storage... 00:04:55.694 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:04:55.694 04:02:10 -- event/event.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:55.694 04:02:10 -- bdev/nbd_common.sh@6 -- # set -e 00:04:55.694 04:02:10 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.694 04:02:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:55.694 04:02:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.694 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:04:55.694 ************************************ 00:04:55.694 START TEST event_perf 00:04:55.694 ************************************ 00:04:55.694 04:02:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.955 Running I/O for 1 seconds...[2024-05-14 04:02:10.314635] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:55.955 [2024-05-14 04:02:10.314772] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797964 ] 00:04:55.955 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.955 [2024-05-14 04:02:10.437463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.955 [2024-05-14 04:02:10.534181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.955 [2024-05-14 04:02:10.534208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.955 [2024-05-14 04:02:10.534316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.955 [2024-05-14 04:02:10.534328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.338 Running I/O for 1 seconds... 00:04:57.338 lcore 0: 152939 00:04:57.338 lcore 1: 152939 00:04:57.338 lcore 2: 152939 00:04:57.338 lcore 3: 152941 00:04:57.338 done. 
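For reference, the event_perf pass whose per-lcore counts appear above can be re-run with the binary and flags recorded in the trace. This is a minimal sketch assuming the same workspace layout; the use of sudo is an assumption (hugepage-backed DPDK runs normally need root), while the path, core mask and duration are taken directly from the logged command line.

    # mirror the logged invocation: -m 0xF pins four reactors, -t 1 runs for one second
    SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
    sudo "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1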
00:04:57.338 00:04:57.338 real 0m1.413s 00:04:57.338 user 0m4.242s 00:04:57.338 sys 0m0.158s 00:04:57.338 04:02:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.338 04:02:11 -- common/autotest_common.sh@10 -- # set +x 00:04:57.338 ************************************ 00:04:57.338 END TEST event_perf 00:04:57.338 ************************************ 00:04:57.338 04:02:11 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:57.338 04:02:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:57.338 04:02:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.338 04:02:11 -- common/autotest_common.sh@10 -- # set +x 00:04:57.338 ************************************ 00:04:57.338 START TEST event_reactor 00:04:57.338 ************************************ 00:04:57.338 04:02:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:57.338 [2024-05-14 04:02:11.758800] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:57.338 [2024-05-14 04:02:11.758925] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3798300 ] 00:04:57.338 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.338 [2024-05-14 04:02:11.872693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.598 [2024-05-14 04:02:11.961881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.534 test_start 00:04:58.534 oneshot 00:04:58.534 tick 100 00:04:58.534 tick 100 00:04:58.534 tick 250 00:04:58.534 tick 100 00:04:58.534 tick 100 00:04:58.534 tick 100 00:04:58.534 tick 250 00:04:58.534 tick 500 00:04:58.534 tick 100 00:04:58.534 tick 100 00:04:58.534 tick 250 00:04:58.534 tick 100 00:04:58.534 tick 100 00:04:58.534 test_end 00:04:58.534 00:04:58.534 real 0m1.384s 00:04:58.534 user 0m1.245s 00:04:58.534 sys 0m0.131s 00:04:58.534 04:02:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.534 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:04:58.534 ************************************ 00:04:58.534 END TEST event_reactor 00:04:58.534 ************************************ 00:04:58.795 04:02:13 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.795 04:02:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:58.795 04:02:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.795 04:02:13 -- common/autotest_common.sh@10 -- # set +x 00:04:58.795 ************************************ 00:04:58.795 START TEST event_reactor_perf 00:04:58.795 ************************************ 00:04:58.795 04:02:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.795 [2024-05-14 04:02:13.178876] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:58.795 [2024-05-14 04:02:13.178999] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3798617 ] 00:04:58.795 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.795 [2024-05-14 04:02:13.294774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.054 [2024-05-14 04:02:13.383060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.023 test_start 00:05:00.023 test_end 00:05:00.023 Performance: 416662 events per second 00:05:00.023 00:05:00.023 real 0m1.380s 00:05:00.023 user 0m1.243s 00:05:00.023 sys 0m0.130s 00:05:00.023 04:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.023 04:02:14 -- common/autotest_common.sh@10 -- # set +x 00:05:00.023 ************************************ 00:05:00.023 END TEST event_reactor_perf 00:05:00.023 ************************************ 00:05:00.023 04:02:14 -- event/event.sh@49 -- # uname -s 00:05:00.023 04:02:14 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:00.023 04:02:14 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:00.023 04:02:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.023 04:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.023 04:02:14 -- common/autotest_common.sh@10 -- # set +x 00:05:00.023 ************************************ 00:05:00.023 START TEST event_scheduler 00:05:00.023 ************************************ 00:05:00.023 04:02:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:00.283 * Looking for test storage... 00:05:00.283 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler 00:05:00.283 04:02:14 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:00.283 04:02:14 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3798961 00:05:00.283 04:02:14 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.283 04:02:14 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:00.283 04:02:14 -- scheduler/scheduler.sh@37 -- # waitforlisten 3798961 00:05:00.283 04:02:14 -- common/autotest_common.sh@819 -- # '[' -z 3798961 ']' 00:05:00.283 04:02:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.283 04:02:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:00.283 04:02:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.283 04:02:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:00.283 04:02:14 -- common/autotest_common.sh@10 -- # set +x 00:05:00.283 [2024-05-14 04:02:14.696423] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:00.283 [2024-05-14 04:02:14.696573] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3798961 ] 00:05:00.283 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.283 [2024-05-14 04:02:14.828320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.542 [2024-05-14 04:02:14.921514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.542 [2024-05-14 04:02:14.921626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.542 [2024-05-14 04:02:14.921632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.542 [2024-05-14 04:02:14.921634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.110 04:02:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:01.110 04:02:15 -- common/autotest_common.sh@852 -- # return 0 00:05:01.110 04:02:15 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:01.110 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.110 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.110 POWER: Env isn't set yet! 00:05:01.110 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:01.110 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:01.110 POWER: Cannot set governor of lcore 0 to userspace 00:05:01.110 POWER: Attempting to initialise PSTAT power management... 00:05:01.110 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:01.110 POWER: Initialized successfully for lcore 0 power management 00:05:01.110 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:01.110 POWER: Initialized successfully for lcore 1 power management 00:05:01.110 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:01.110 POWER: Initialized successfully for lcore 2 power management 00:05:01.110 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:01.110 POWER: Initialized successfully for lcore 3 power management 00:05:01.110 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.110 04:02:15 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:01.110 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.110 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.110 [2024-05-14 04:02:15.679388] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:01.110 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.110 04:02:15 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:01.110 04:02:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.110 04:02:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.110 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.110 ************************************ 00:05:01.110 START TEST scheduler_create_thread 00:05:01.110 ************************************ 00:05:01.110 04:02:15 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:01.110 04:02:15 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:01.110 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.110 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 2 00:05:01.370 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 3 00:05:01.370 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 4 00:05:01.370 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 5 00:05:01.370 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 6 00:05:01.370 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 7 00:05:01.370 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 8 00:05:01.370 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 9 00:05:01.370 
04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 10 00:05:01.370 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.370 04:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.370 04:02:15 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:01.370 04:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.370 04:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.940 04:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:01.940 04:02:16 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:01.940 04:02:16 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:01.940 04:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:01.940 04:02:16 -- common/autotest_common.sh@10 -- # set +x 00:05:02.881 04:02:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:02.881 00:05:02.881 real 0m1.753s 00:05:02.882 user 0m0.016s 00:05:02.882 sys 0m0.005s 00:05:02.882 04:02:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.882 04:02:17 -- common/autotest_common.sh@10 -- # set +x 00:05:02.882 ************************************ 00:05:02.882 END TEST scheduler_create_thread 00:05:02.882 ************************************ 00:05:03.142 04:02:17 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:03.142 04:02:17 -- scheduler/scheduler.sh@46 -- # killprocess 3798961 00:05:03.142 04:02:17 -- common/autotest_common.sh@926 -- # '[' -z 3798961 ']' 00:05:03.142 04:02:17 -- common/autotest_common.sh@930 -- # kill -0 3798961 00:05:03.142 04:02:17 -- common/autotest_common.sh@931 -- # uname 00:05:03.142 04:02:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:03.142 04:02:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3798961 00:05:03.142 04:02:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:03.142 04:02:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:03.142 04:02:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3798961' 00:05:03.142 killing process with pid 3798961 00:05:03.142 04:02:17 -- common/autotest_common.sh@945 -- # kill 3798961 00:05:03.142 04:02:17 -- common/autotest_common.sh@950 -- # wait 3798961 00:05:03.402 [2024-05-14 04:02:17.920462] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
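The teardown traced above follows the killprocess pattern from autotest_common.sh (empty-pid guard, kill -0 liveness check, comm lookup, kill, wait). The sketch below is reconstructed only from the logged xtrace, so the exact guards, signals and return codes of the real helper may differ.

    killprocess() {
        # reconstructed from the xtrace above; line references (@926 etc.) follow the log
        local pid=$1
        [ -z "$pid" ] && return 1               # @926: refuse an empty pid
        kill -0 "$pid" || return 0              # @930: process already gone, nothing to do
        if [ "$(uname)" = Linux ]; then         # @931
            local name
            name=$(ps --no-headers -o comm= "$pid")   # @932: e.g. reactor_2 in the run above
            [ "$name" = sudo ] && return 1            # @936: never kill a bare sudo wrapper
        fi
        echo "killing process with pid $pid"    # @944
        kill "$pid"                             # @945
        wait "$pid" 2>/dev/null || true         # @950: reap it if it is a child of this shell
    }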
00:05:03.662 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:03.662 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:03.662 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:03.662 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:03.662 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:03.662 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:03.662 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:03.662 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:03.920 00:05:03.920 real 0m3.800s 00:05:03.920 user 0m6.362s 00:05:03.920 sys 0m0.405s 00:05:03.920 04:02:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.920 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:05:03.920 ************************************ 00:05:03.921 END TEST event_scheduler 00:05:03.921 ************************************ 00:05:03.921 04:02:18 -- event/event.sh@51 -- # modprobe -n nbd 00:05:03.921 04:02:18 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:03.921 04:02:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.921 04:02:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.921 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:05:03.921 ************************************ 00:05:03.921 START TEST app_repeat 00:05:03.921 ************************************ 00:05:03.921 04:02:18 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:03.921 04:02:18 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.921 04:02:18 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.921 04:02:18 -- event/event.sh@13 -- # local nbd_list 00:05:03.921 04:02:18 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.921 04:02:18 -- event/event.sh@14 -- # local bdev_list 00:05:03.921 04:02:18 -- event/event.sh@15 -- # local repeat_times=4 00:05:03.921 04:02:18 -- event/event.sh@17 -- # modprobe nbd 00:05:03.921 04:02:18 -- event/event.sh@19 -- # repeat_pid=3799783 00:05:03.921 04:02:18 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.921 04:02:18 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3799783' 00:05:03.921 Process app_repeat pid: 3799783 00:05:03.921 04:02:18 -- event/event.sh@23 -- # for i in {0..2} 00:05:03.921 04:02:18 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:03.921 spdk_app_start Round 0 00:05:03.921 04:02:18 -- event/event.sh@25 -- # waitforlisten 3799783 /var/tmp/spdk-nbd.sock 00:05:03.921 04:02:18 -- common/autotest_common.sh@819 -- # '[' -z 3799783 ']' 00:05:03.921 04:02:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.921 04:02:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:03.921 04:02:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:03.921 04:02:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:03.921 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:05:03.921 04:02:18 -- event/event.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:03.921 [2024-05-14 04:02:18.460310] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:03.921 [2024-05-14 04:02:18.460429] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3799783 ] 00:05:04.179 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.179 [2024-05-14 04:02:18.579231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.179 [2024-05-14 04:02:18.670752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.179 [2024-05-14 04:02:18.670754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.748 04:02:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:04.748 04:02:19 -- common/autotest_common.sh@852 -- # return 0 00:05:04.748 04:02:19 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.009 Malloc0 00:05:05.009 04:02:19 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.009 Malloc1 00:05:05.009 04:02:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@12 -- # local i 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.009 04:02:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.271 /dev/nbd0 00:05:05.271 04:02:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.271 04:02:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.271 04:02:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:05.271 04:02:19 -- common/autotest_common.sh@857 -- # local i 00:05:05.271 04:02:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:05.271 04:02:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:05.271 04:02:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 
00:05:05.271 04:02:19 -- common/autotest_common.sh@861 -- # break 00:05:05.271 04:02:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:05.271 04:02:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:05.271 04:02:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.271 1+0 records in 00:05:05.271 1+0 records out 00:05:05.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001976 s, 20.7 MB/s 00:05:05.271 04:02:19 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:05.271 04:02:19 -- common/autotest_common.sh@874 -- # size=4096 00:05:05.271 04:02:19 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:05.271 04:02:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:05.271 04:02:19 -- common/autotest_common.sh@877 -- # return 0 00:05:05.271 04:02:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.271 04:02:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.271 04:02:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.531 /dev/nbd1 00:05:05.531 04:02:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.531 04:02:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.531 04:02:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:05.531 04:02:19 -- common/autotest_common.sh@857 -- # local i 00:05:05.531 04:02:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:05.531 04:02:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:05.532 04:02:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:05.532 04:02:19 -- common/autotest_common.sh@861 -- # break 00:05:05.532 04:02:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:05.532 04:02:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:05.532 04:02:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.532 1+0 records in 00:05:05.532 1+0 records out 00:05:05.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192443 s, 21.3 MB/s 00:05:05.532 04:02:19 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:05.532 04:02:19 -- common/autotest_common.sh@874 -- # size=4096 00:05:05.532 04:02:19 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:05.532 04:02:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:05.532 04:02:19 -- common/autotest_common.sh@877 -- # return 0 00:05:05.532 04:02:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.532 04:02:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.532 04:02:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.532 04:02:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.532 04:02:19 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.532 04:02:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.532 { 00:05:05.532 "nbd_device": "/dev/nbd0", 00:05:05.532 "bdev_name": "Malloc0" 00:05:05.532 }, 00:05:05.532 { 00:05:05.532 "nbd_device": "/dev/nbd1", 00:05:05.532 
"bdev_name": "Malloc1" 00:05:05.532 } 00:05:05.532 ]' 00:05:05.532 04:02:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.532 { 00:05:05.532 "nbd_device": "/dev/nbd0", 00:05:05.532 "bdev_name": "Malloc0" 00:05:05.532 }, 00:05:05.532 { 00:05:05.532 "nbd_device": "/dev/nbd1", 00:05:05.532 "bdev_name": "Malloc1" 00:05:05.532 } 00:05:05.532 ]' 00:05:05.532 04:02:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.532 04:02:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.532 /dev/nbd1' 00:05:05.532 04:02:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.532 04:02:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.532 /dev/nbd1' 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.792 256+0 records in 00:05:05.792 256+0 records out 00:05:05.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437439 s, 240 MB/s 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.792 256+0 records in 00:05:05.792 256+0 records out 00:05:05.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151055 s, 69.4 MB/s 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.792 256+0 records in 00:05:05.792 256+0 records out 00:05:05.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183382 s, 57.2 MB/s 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@51 -- # local i 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@41 -- # break 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.792 04:02:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@41 -- # break 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.061 04:02:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.321 04:02:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.321 04:02:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.321 04:02:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.321 04:02:20 -- bdev/nbd_common.sh@65 -- # true 00:05:06.321 04:02:20 -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.321 04:02:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.321 04:02:20 -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.321 04:02:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.321 04:02:20 -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.321 04:02:20 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.321 04:02:20 -- event/event.sh@35 -- # sleep 3 00:05:06.888 [2024-05-14 
04:02:21.341868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.888 [2024-05-14 04:02:21.425777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.888 [2024-05-14 04:02:21.425782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.148 [2024-05-14 04:02:21.499343] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.148 [2024-05-14 04:02:21.499379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.689 04:02:23 -- event/event.sh@23 -- # for i in {0..2} 00:05:09.689 04:02:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:09.689 spdk_app_start Round 1 00:05:09.689 04:02:23 -- event/event.sh@25 -- # waitforlisten 3799783 /var/tmp/spdk-nbd.sock 00:05:09.689 04:02:23 -- common/autotest_common.sh@819 -- # '[' -z 3799783 ']' 00:05:09.689 04:02:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.689 04:02:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.689 04:02:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.689 04:02:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.689 04:02:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.689 04:02:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.689 04:02:24 -- common/autotest_common.sh@852 -- # return 0 00:05:09.689 04:02:24 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.689 Malloc0 00:05:09.689 04:02:24 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.953 Malloc1 00:05:09.953 04:02:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@12 -- # local i 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.953 /dev/nbd0 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.953 
04:02:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.953 04:02:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:09.953 04:02:24 -- common/autotest_common.sh@857 -- # local i 00:05:09.953 04:02:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:09.953 04:02:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:09.953 04:02:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:09.953 04:02:24 -- common/autotest_common.sh@861 -- # break 00:05:09.953 04:02:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:09.953 04:02:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:09.953 04:02:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.953 1+0 records in 00:05:09.953 1+0 records out 00:05:09.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310086 s, 13.2 MB/s 00:05:09.953 04:02:24 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:09.953 04:02:24 -- common/autotest_common.sh@874 -- # size=4096 00:05:09.953 04:02:24 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:09.953 04:02:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:09.953 04:02:24 -- common/autotest_common.sh@877 -- # return 0 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.953 04:02:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.212 /dev/nbd1 00:05:10.212 04:02:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.212 04:02:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.212 04:02:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:10.212 04:02:24 -- common/autotest_common.sh@857 -- # local i 00:05:10.212 04:02:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:10.212 04:02:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:10.212 04:02:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:10.212 04:02:24 -- common/autotest_common.sh@861 -- # break 00:05:10.212 04:02:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:10.212 04:02:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:10.212 04:02:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.212 1+0 records in 00:05:10.212 1+0 records out 00:05:10.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232035 s, 17.7 MB/s 00:05:10.212 04:02:24 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:10.212 04:02:24 -- common/autotest_common.sh@874 -- # size=4096 00:05:10.212 04:02:24 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:10.212 04:02:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:10.212 04:02:24 -- common/autotest_common.sh@877 -- # return 0 00:05:10.212 04:02:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.212 04:02:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.212 04:02:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.212 04:02:24 -- bdev/nbd_common.sh@61 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.212 04:02:24 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.472 04:02:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.472 { 00:05:10.473 "nbd_device": "/dev/nbd0", 00:05:10.473 "bdev_name": "Malloc0" 00:05:10.473 }, 00:05:10.473 { 00:05:10.473 "nbd_device": "/dev/nbd1", 00:05:10.473 "bdev_name": "Malloc1" 00:05:10.473 } 00:05:10.473 ]' 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.473 { 00:05:10.473 "nbd_device": "/dev/nbd0", 00:05:10.473 "bdev_name": "Malloc0" 00:05:10.473 }, 00:05:10.473 { 00:05:10.473 "nbd_device": "/dev/nbd1", 00:05:10.473 "bdev_name": "Malloc1" 00:05:10.473 } 00:05:10.473 ]' 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.473 /dev/nbd1' 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.473 /dev/nbd1' 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.473 256+0 records in 00:05:10.473 256+0 records out 00:05:10.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433408 s, 242 MB/s 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.473 256+0 records in 00:05:10.473 256+0 records out 00:05:10.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138092 s, 75.9 MB/s 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.473 256+0 records in 00:05:10.473 256+0 records out 00:05:10.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150855 s, 69.5 MB/s 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.473 04:02:24 -- 
bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@51 -- # local i 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.473 04:02:24 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@41 -- # break 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@41 -- # break 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.734 04:02:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@65 -- # true 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.995 
04:02:25 -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.995 04:02:25 -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.995 04:02:25 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.256 04:02:25 -- event/event.sh@35 -- # sleep 3 00:05:11.827 [2024-05-14 04:02:26.158007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.827 [2024-05-14 04:02:26.255386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.827 [2024-05-14 04:02:26.255386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.827 [2024-05-14 04:02:26.328212] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.827 [2024-05-14 04:02:26.328245] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.372 04:02:28 -- event/event.sh@23 -- # for i in {0..2} 00:05:14.372 04:02:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:14.372 spdk_app_start Round 2 00:05:14.372 04:02:28 -- event/event.sh@25 -- # waitforlisten 3799783 /var/tmp/spdk-nbd.sock 00:05:14.372 04:02:28 -- common/autotest_common.sh@819 -- # '[' -z 3799783 ']' 00:05:14.372 04:02:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.372 04:02:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:14.372 04:02:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
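The dd/cmp sequences traced in the rounds above implement the write-then-verify cycle of nbd_dd_data_verify: a 1 MiB random pattern file is written onto each exported nbd device with direct I/O, then compared back byte-for-byte. The sketch below mirrors the logged commands; the variable names and the fixed device list are illustrative.

    # write/verify cycle as seen in the trace (256 x 4096 B = 1 MiB per device)
    testdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event
    tmp_file=$testdir/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write pattern to device
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"                              # verify first 1 MiB matches
    done
    rm "$tmp_file"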
00:05:14.372 04:02:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:14.372 04:02:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.372 04:02:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.372 04:02:28 -- common/autotest_common.sh@852 -- # return 0 00:05:14.372 04:02:28 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.630 Malloc0 00:05:14.630 04:02:28 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.630 Malloc1 00:05:14.630 04:02:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@12 -- # local i 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.630 04:02:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.891 /dev/nbd0 00:05:14.891 04:02:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.891 04:02:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.891 04:02:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:14.891 04:02:29 -- common/autotest_common.sh@857 -- # local i 00:05:14.891 04:02:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:14.891 04:02:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:14.891 04:02:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:14.891 04:02:29 -- common/autotest_common.sh@861 -- # break 00:05:14.891 04:02:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:14.891 04:02:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:14.891 04:02:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.891 1+0 records in 00:05:14.891 1+0 records out 00:05:14.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225015 s, 18.2 MB/s 00:05:14.891 04:02:29 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:14.891 04:02:29 -- common/autotest_common.sh@874 -- # size=4096 00:05:14.891 04:02:29 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:14.891 04:02:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
00:05:14.891 04:02:29 -- common/autotest_common.sh@877 -- # return 0 00:05:14.891 04:02:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.891 04:02:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.891 04:02:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.891 /dev/nbd1 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.150 04:02:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:15.150 04:02:29 -- common/autotest_common.sh@857 -- # local i 00:05:15.150 04:02:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:15.150 04:02:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:15.150 04:02:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:15.150 04:02:29 -- common/autotest_common.sh@861 -- # break 00:05:15.150 04:02:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:15.150 04:02:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:15.150 04:02:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.150 1+0 records in 00:05:15.150 1+0 records out 00:05:15.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026047 s, 15.7 MB/s 00:05:15.150 04:02:29 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:15.150 04:02:29 -- common/autotest_common.sh@874 -- # size=4096 00:05:15.150 04:02:29 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:15.150 04:02:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:15.150 04:02:29 -- common/autotest_common.sh@877 -- # return 0 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.150 { 00:05:15.150 "nbd_device": "/dev/nbd0", 00:05:15.150 "bdev_name": "Malloc0" 00:05:15.150 }, 00:05:15.150 { 00:05:15.150 "nbd_device": "/dev/nbd1", 00:05:15.150 "bdev_name": "Malloc1" 00:05:15.150 } 00:05:15.150 ]' 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.150 { 00:05:15.150 "nbd_device": "/dev/nbd0", 00:05:15.150 "bdev_name": "Malloc0" 00:05:15.150 }, 00:05:15.150 { 00:05:15.150 "nbd_device": "/dev/nbd1", 00:05:15.150 "bdev_name": "Malloc1" 00:05:15.150 } 00:05:15.150 ]' 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.150 /dev/nbd1' 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.150 /dev/nbd1' 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 
00:05:15.150 04:02:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.150 04:02:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.151 04:02:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.151 04:02:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.151 256+0 records in 00:05:15.151 256+0 records out 00:05:15.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452536 s, 232 MB/s 00:05:15.151 04:02:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.151 04:02:29 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.151 256+0 records in 00:05:15.151 256+0 records out 00:05:15.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148343 s, 70.7 MB/s 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.410 256+0 records in 00:05:15.410 256+0 records out 00:05:15.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162289 s, 64.6 MB/s 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.410 04:02:29 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@51 -- # local i 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.411 04:02:29 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@41 -- # break 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.411 04:02:29 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@41 -- # break 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.671 04:02:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.672 04:02:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.672 04:02:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.672 04:02:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.967 04:02:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.967 04:02:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.967 04:02:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.967 04:02:30 -- bdev/nbd_common.sh@65 -- # true 00:05:15.967 04:02:30 -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.967 04:02:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.967 04:02:30 -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.967 04:02:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.967 04:02:30 -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.967 04:02:30 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.967 04:02:30 -- event/event.sh@35 -- # sleep 3 00:05:16.545 [2024-05-14 04:02:30.967621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.545 [2024-05-14 04:02:31.051422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.545 [2024-05-14 04:02:31.051427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.545 [2024-05-14 04:02:31.124956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.545 [2024-05-14 04:02:31.124994] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
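The data path itself is exercised with plain dd and cmp: a 1 MiB random file is written through each nbd device with O_DIRECT, compared back, and then the devices are detached and the app is told to shut itself down over RPC. A condensed sketch of that sequence as it appears in the trace (scratch-file location shortened; the harness additionally polls /proc/partitions until nbd0/nbd1 disappear after each nbd_stop_disk):

    # write phase: 256 x 4 KiB of random data pushed through each nbd device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1M of each device against the source file
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest "$dev"
    done
    rm /tmp/nbdrandtest

    # teardown: detach the devices, then ask the app to stop itself
    for dev in /dev/nbd0 /dev/nbd1; do
        rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    done
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM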
00:05:19.093 04:02:33 -- event/event.sh@38 -- # waitforlisten 3799783 /var/tmp/spdk-nbd.sock 00:05:19.093 04:02:33 -- common/autotest_common.sh@819 -- # '[' -z 3799783 ']' 00:05:19.093 04:02:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.093 04:02:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:19.093 04:02:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.093 04:02:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:19.093 04:02:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.093 04:02:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:19.093 04:02:33 -- common/autotest_common.sh@852 -- # return 0 00:05:19.093 04:02:33 -- event/event.sh@39 -- # killprocess 3799783 00:05:19.093 04:02:33 -- common/autotest_common.sh@926 -- # '[' -z 3799783 ']' 00:05:19.093 04:02:33 -- common/autotest_common.sh@930 -- # kill -0 3799783 00:05:19.093 04:02:33 -- common/autotest_common.sh@931 -- # uname 00:05:19.093 04:02:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:19.093 04:02:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3799783 00:05:19.094 04:02:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:19.353 04:02:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:19.353 04:02:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3799783' 00:05:19.353 killing process with pid 3799783 00:05:19.353 04:02:33 -- common/autotest_common.sh@945 -- # kill 3799783 00:05:19.353 04:02:33 -- common/autotest_common.sh@950 -- # wait 3799783 00:05:19.613 spdk_app_start is called in Round 0. 00:05:19.613 Shutdown signal received, stop current app iteration 00:05:19.613 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:19.613 spdk_app_start is called in Round 1. 00:05:19.613 Shutdown signal received, stop current app iteration 00:05:19.613 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:19.613 spdk_app_start is called in Round 2. 00:05:19.613 Shutdown signal received, stop current app iteration 00:05:19.613 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:19.613 spdk_app_start is called in Round 3. 
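When a test is done with an app it kills it by pid, but only after checking that the pid is still alive and still belongs to an SPDK reactor rather than something privileged. A sketch of that killprocess pattern, following the checks visible in the trace; the real helper's handling of failures and of sudo-owned pids differs, so treat this as a simplification:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1              # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # refuse to kill if the pid was recycled by an unrelated (e.g. sudo) process
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                             # reap it so later tests start clean
    }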
00:05:19.613 Shutdown signal received, stop current app iteration 00:05:19.613 04:02:34 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:19.613 04:02:34 -- event/event.sh@42 -- # return 0 00:05:19.613 00:05:19.613 real 0m15.692s 00:05:19.613 user 0m32.914s 00:05:19.613 sys 0m2.010s 00:05:19.613 04:02:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.613 04:02:34 -- common/autotest_common.sh@10 -- # set +x 00:05:19.613 ************************************ 00:05:19.613 END TEST app_repeat 00:05:19.613 ************************************ 00:05:19.613 04:02:34 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:19.613 04:02:34 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:19.613 04:02:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.613 04:02:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.613 04:02:34 -- common/autotest_common.sh@10 -- # set +x 00:05:19.613 ************************************ 00:05:19.613 START TEST cpu_locks 00:05:19.613 ************************************ 00:05:19.613 04:02:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:19.874 * Looking for test storage... 00:05:19.874 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:05:19.874 04:02:34 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:19.874 04:02:34 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:19.874 04:02:34 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:19.874 04:02:34 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:19.874 04:02:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.874 04:02:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.874 04:02:34 -- common/autotest_common.sh@10 -- # set +x 00:05:19.874 ************************************ 00:05:19.874 START TEST default_locks 00:05:19.874 ************************************ 00:05:19.874 04:02:34 -- common/autotest_common.sh@1104 -- # default_locks 00:05:19.874 04:02:34 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3803171 00:05:19.874 04:02:34 -- event/cpu_locks.sh@47 -- # waitforlisten 3803171 00:05:19.874 04:02:34 -- common/autotest_common.sh@819 -- # '[' -z 3803171 ']' 00:05:19.874 04:02:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.874 04:02:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:19.874 04:02:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.874 04:02:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:19.874 04:02:34 -- common/autotest_common.sh@10 -- # set +x 00:05:19.874 04:02:34 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.874 [2024-05-14 04:02:34.318811] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:19.874 [2024-05-14 04:02:34.318963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803171 ] 00:05:19.874 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.874 [2024-05-14 04:02:34.451044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.134 [2024-05-14 04:02:34.541499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.134 [2024-05-14 04:02:34.541708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.705 04:02:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:20.705 04:02:35 -- common/autotest_common.sh@852 -- # return 0 00:05:20.705 04:02:35 -- event/cpu_locks.sh@49 -- # locks_exist 3803171 00:05:20.705 04:02:35 -- event/cpu_locks.sh@22 -- # lslocks -p 3803171 00:05:20.705 04:02:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.705 lslocks: write error 00:05:20.705 04:02:35 -- event/cpu_locks.sh@50 -- # killprocess 3803171 00:05:20.706 04:02:35 -- common/autotest_common.sh@926 -- # '[' -z 3803171 ']' 00:05:20.706 04:02:35 -- common/autotest_common.sh@930 -- # kill -0 3803171 00:05:20.706 04:02:35 -- common/autotest_common.sh@931 -- # uname 00:05:20.706 04:02:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:20.706 04:02:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3803171 00:05:20.706 04:02:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:20.706 04:02:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:20.706 04:02:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3803171' 00:05:20.706 killing process with pid 3803171 00:05:20.706 04:02:35 -- common/autotest_common.sh@945 -- # kill 3803171 00:05:20.706 04:02:35 -- common/autotest_common.sh@950 -- # wait 3803171 00:05:21.646 04:02:36 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3803171 00:05:21.646 04:02:36 -- common/autotest_common.sh@640 -- # local es=0 00:05:21.646 04:02:36 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3803171 00:05:21.646 04:02:36 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:21.646 04:02:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:21.646 04:02:36 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:21.646 04:02:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:21.646 04:02:36 -- common/autotest_common.sh@643 -- # waitforlisten 3803171 00:05:21.646 04:02:36 -- common/autotest_common.sh@819 -- # '[' -z 3803171 ']' 00:05:21.646 04:02:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.646 04:02:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:21.646 04:02:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
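The core assertion of every cpu_locks case is whether a given pid actually holds the SPDK CPU-core lock files, which the harness checks with lslocks, as shown above. A sketch of that probe; note that the "lslocks: write error" lines in the log are lslocks complaining about its output pipe closing once grep -q has matched, not a test failure:

    locks_exist() {
        local pid=$1
        # the target keeps one POSIX lock per claimed core, on files named spdk_cpu_lock_*
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" || { echo "expected $spdk_tgt_pid to hold its core lock"; exit 1; }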
00:05:21.646 04:02:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:21.646 04:02:36 -- common/autotest_common.sh@10 -- # set +x 00:05:21.646 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3803171) - No such process 00:05:21.646 ERROR: process (pid: 3803171) is no longer running 00:05:21.646 04:02:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:21.646 04:02:36 -- common/autotest_common.sh@852 -- # return 1 00:05:21.646 04:02:36 -- common/autotest_common.sh@643 -- # es=1 00:05:21.646 04:02:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:21.646 04:02:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:21.646 04:02:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:21.646 04:02:36 -- event/cpu_locks.sh@54 -- # no_locks 00:05:21.646 04:02:36 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:21.646 04:02:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:21.646 04:02:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:21.646 00:05:21.646 real 0m1.909s 00:05:21.646 user 0m1.831s 00:05:21.646 sys 0m0.543s 00:05:21.646 04:02:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.646 04:02:36 -- common/autotest_common.sh@10 -- # set +x 00:05:21.646 ************************************ 00:05:21.646 END TEST default_locks 00:05:21.646 ************************************ 00:05:21.646 04:02:36 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:21.646 04:02:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.646 04:02:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.646 04:02:36 -- common/autotest_common.sh@10 -- # set +x 00:05:21.646 ************************************ 00:05:21.646 START TEST default_locks_via_rpc 00:05:21.646 ************************************ 00:05:21.646 04:02:36 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:21.646 04:02:36 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3803510 00:05:21.646 04:02:36 -- event/cpu_locks.sh@63 -- # waitforlisten 3803510 00:05:21.646 04:02:36 -- common/autotest_common.sh@819 -- # '[' -z 3803510 ']' 00:05:21.646 04:02:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.646 04:02:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:21.646 04:02:36 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.646 04:02:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.646 04:02:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:21.646 04:02:36 -- common/autotest_common.sh@10 -- # set +x 00:05:21.907 [2024-05-14 04:02:36.268189] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
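Once the target is killed, default_locks turns the assertions around: waiting on the old pid has to fail, and no spdk_cpu_lock_* files may be left behind. A sketch of those two negative checks, with the harness's NOT/valid_exec_arg wrapper simplified to a plain bang and nullglob assumed so that an empty glob really is empty:

    # the pid must be gone: waiting on it again has to fail
    ! waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

    # and the kill must not have leaked any core-lock files
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 )) || { echo "stale locks: ${lock_files[*]}"; exit 1; }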
00:05:21.907 [2024-05-14 04:02:36.268338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803510 ] 00:05:21.907 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.907 [2024-05-14 04:02:36.399371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.907 [2024-05-14 04:02:36.490630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.907 [2024-05-14 04:02:36.490834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.478 04:02:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:22.478 04:02:36 -- common/autotest_common.sh@852 -- # return 0 00:05:22.478 04:02:36 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:22.478 04:02:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:22.478 04:02:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.478 04:02:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:22.478 04:02:36 -- event/cpu_locks.sh@67 -- # no_locks 00:05:22.478 04:02:36 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.478 04:02:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.478 04:02:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.478 04:02:36 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:22.478 04:02:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:22.478 04:02:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.478 04:02:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:22.478 04:02:36 -- event/cpu_locks.sh@71 -- # locks_exist 3803510 00:05:22.478 04:02:36 -- event/cpu_locks.sh@22 -- # lslocks -p 3803510 00:05:22.478 04:02:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.738 04:02:37 -- event/cpu_locks.sh@73 -- # killprocess 3803510 00:05:22.738 04:02:37 -- common/autotest_common.sh@926 -- # '[' -z 3803510 ']' 00:05:22.738 04:02:37 -- common/autotest_common.sh@930 -- # kill -0 3803510 00:05:22.738 04:02:37 -- common/autotest_common.sh@931 -- # uname 00:05:22.738 04:02:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:22.738 04:02:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3803510 00:05:22.738 04:02:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:22.738 04:02:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:22.738 04:02:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3803510' 00:05:22.738 killing process with pid 3803510 00:05:22.738 04:02:37 -- common/autotest_common.sh@945 -- # kill 3803510 00:05:22.738 04:02:37 -- common/autotest_common.sh@950 -- # wait 3803510 00:05:23.678 00:05:23.678 real 0m1.887s 00:05:23.678 user 0m1.820s 00:05:23.678 sys 0m0.519s 00:05:23.678 04:02:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.678 04:02:38 -- common/autotest_common.sh@10 -- # set +x 00:05:23.678 ************************************ 00:05:23.678 END TEST default_locks_via_rpc 00:05:23.678 ************************************ 00:05:23.678 04:02:38 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:23.678 04:02:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.678 04:02:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.678 04:02:38 -- 
common/autotest_common.sh@10 -- # set +x 00:05:23.678 ************************************ 00:05:23.678 START TEST non_locking_app_on_locked_coremask 00:05:23.678 ************************************ 00:05:23.678 04:02:38 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:23.678 04:02:38 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3803869 00:05:23.678 04:02:38 -- event/cpu_locks.sh@81 -- # waitforlisten 3803869 /var/tmp/spdk.sock 00:05:23.678 04:02:38 -- common/autotest_common.sh@819 -- # '[' -z 3803869 ']' 00:05:23.678 04:02:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.678 04:02:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:23.678 04:02:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.678 04:02:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:23.678 04:02:38 -- common/autotest_common.sh@10 -- # set +x 00:05:23.678 04:02:38 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.678 [2024-05-14 04:02:38.172051] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:23.678 [2024-05-14 04:02:38.172176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803869 ] 00:05:23.678 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.937 [2024-05-14 04:02:38.287261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.937 [2024-05-14 04:02:38.384787] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.937 [2024-05-14 04:02:38.385019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.506 04:02:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:24.506 04:02:38 -- common/autotest_common.sh@852 -- # return 0 00:05:24.506 04:02:38 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3804142 00:05:24.506 04:02:38 -- event/cpu_locks.sh@85 -- # waitforlisten 3804142 /var/tmp/spdk2.sock 00:05:24.506 04:02:38 -- common/autotest_common.sh@819 -- # '[' -z 3804142 ']' 00:05:24.506 04:02:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.506 04:02:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.506 04:02:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.506 04:02:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.506 04:02:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.506 04:02:38 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:24.507 [2024-05-14 04:02:38.957542] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
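non_locking_app_on_locked_coremask, started above, shows the escape hatch: a second target may share a core that is already locked as long as it opts out of the locking and talks on its own RPC socket. A sketch of the two launches, using the flags from the trace with the binary path shortened and backgrounding added for readability:

    # first target claims core 0 and creates /var/tmp/spdk_cpu_lock_000
    ./build/bin/spdk_tgt -m 0x1 &
    pid1=$!

    # second target reuses core 0 but skips the lock, and must use a different RPC socket
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    # -> its startup log prints "CPU core locks deactivated." and both targets coexist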
00:05:24.507 [2024-05-14 04:02:38.957660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3804142 ] 00:05:24.507 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.764 [2024-05-14 04:02:39.108652] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:24.764 [2024-05-14 04:02:39.108690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.764 [2024-05-14 04:02:39.292236] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.764 [2024-05-14 04:02:39.292412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.142 04:02:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.142 04:02:40 -- common/autotest_common.sh@852 -- # return 0 00:05:26.142 04:02:40 -- event/cpu_locks.sh@87 -- # locks_exist 3803869 00:05:26.142 04:02:40 -- event/cpu_locks.sh@22 -- # lslocks -p 3803869 00:05:26.142 04:02:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.142 lslocks: write error 00:05:26.142 04:02:40 -- event/cpu_locks.sh@89 -- # killprocess 3803869 00:05:26.142 04:02:40 -- common/autotest_common.sh@926 -- # '[' -z 3803869 ']' 00:05:26.142 04:02:40 -- common/autotest_common.sh@930 -- # kill -0 3803869 00:05:26.142 04:02:40 -- common/autotest_common.sh@931 -- # uname 00:05:26.142 04:02:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:26.142 04:02:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3803869 00:05:26.142 04:02:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:26.142 04:02:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:26.142 04:02:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3803869' 00:05:26.142 killing process with pid 3803869 00:05:26.142 04:02:40 -- common/autotest_common.sh@945 -- # kill 3803869 00:05:26.142 04:02:40 -- common/autotest_common.sh@950 -- # wait 3803869 00:05:28.045 04:02:42 -- event/cpu_locks.sh@90 -- # killprocess 3804142 00:05:28.045 04:02:42 -- common/autotest_common.sh@926 -- # '[' -z 3804142 ']' 00:05:28.045 04:02:42 -- common/autotest_common.sh@930 -- # kill -0 3804142 00:05:28.045 04:02:42 -- common/autotest_common.sh@931 -- # uname 00:05:28.045 04:02:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:28.045 04:02:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3804142 00:05:28.046 04:02:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:28.046 04:02:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:28.046 04:02:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3804142' 00:05:28.046 killing process with pid 3804142 00:05:28.046 04:02:42 -- common/autotest_common.sh@945 -- # kill 3804142 00:05:28.046 04:02:42 -- common/autotest_common.sh@950 -- # wait 3804142 00:05:28.613 00:05:28.613 real 0m5.066s 00:05:28.613 user 0m5.222s 00:05:28.613 sys 0m1.010s 00:05:28.613 04:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.613 04:02:43 -- common/autotest_common.sh@10 -- # set +x 00:05:28.613 ************************************ 00:05:28.613 END TEST non_locking_app_on_locked_coremask 00:05:28.613 ************************************ 00:05:28.613 04:02:43 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:05:28.613 04:02:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.613 04:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.613 04:02:43 -- common/autotest_common.sh@10 -- # set +x 00:05:28.613 ************************************ 00:05:28.613 START TEST locking_app_on_unlocked_coremask 00:05:28.613 ************************************ 00:05:28.613 04:02:43 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:28.613 04:02:43 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3805083 00:05:28.613 04:02:43 -- event/cpu_locks.sh@99 -- # waitforlisten 3805083 /var/tmp/spdk.sock 00:05:28.613 04:02:43 -- common/autotest_common.sh@819 -- # '[' -z 3805083 ']' 00:05:28.613 04:02:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.613 04:02:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:28.613 04:02:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.613 04:02:43 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:28.613 04:02:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:28.613 04:02:43 -- common/autotest_common.sh@10 -- # set +x 00:05:28.874 [2024-05-14 04:02:43.278916] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:28.874 [2024-05-14 04:02:43.279058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805083 ] 00:05:28.874 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.874 [2024-05-14 04:02:43.411490] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:28.874 [2024-05-14 04:02:43.411533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.136 [2024-05-14 04:02:43.500501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.136 [2024-05-14 04:02:43.500706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.396 04:02:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:29.396 04:02:43 -- common/autotest_common.sh@852 -- # return 0 00:05:29.396 04:02:43 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3805096 00:05:29.396 04:02:43 -- event/cpu_locks.sh@103 -- # waitforlisten 3805096 /var/tmp/spdk2.sock 00:05:29.396 04:02:43 -- common/autotest_common.sh@819 -- # '[' -z 3805096 ']' 00:05:29.396 04:02:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.396 04:02:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:29.396 04:02:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:29.396 04:02:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:29.396 04:02:43 -- common/autotest_common.sh@10 -- # set +x 00:05:29.396 04:02:43 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.654 [2024-05-14 04:02:44.073710] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:29.654 [2024-05-14 04:02:44.073814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805096 ] 00:05:29.654 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.654 [2024-05-14 04:02:44.212890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.912 [2024-05-14 04:02:44.398766] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.912 [2024-05-14 04:02:44.398955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.848 04:02:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.848 04:02:45 -- common/autotest_common.sh@852 -- # return 0 00:05:30.848 04:02:45 -- event/cpu_locks.sh@105 -- # locks_exist 3805096 00:05:30.848 04:02:45 -- event/cpu_locks.sh@22 -- # lslocks -p 3805096 00:05:30.848 04:02:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.107 lslocks: write error 00:05:31.107 04:02:45 -- event/cpu_locks.sh@107 -- # killprocess 3805083 00:05:31.107 04:02:45 -- common/autotest_common.sh@926 -- # '[' -z 3805083 ']' 00:05:31.107 04:02:45 -- common/autotest_common.sh@930 -- # kill -0 3805083 00:05:31.107 04:02:45 -- common/autotest_common.sh@931 -- # uname 00:05:31.107 04:02:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:31.107 04:02:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3805083 00:05:31.107 04:02:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:31.107 04:02:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:31.107 04:02:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3805083' 00:05:31.107 killing process with pid 3805083 00:05:31.107 04:02:45 -- common/autotest_common.sh@945 -- # kill 3805083 00:05:31.107 04:02:45 -- common/autotest_common.sh@950 -- # wait 3805083 00:05:33.019 04:02:47 -- event/cpu_locks.sh@108 -- # killprocess 3805096 00:05:33.019 04:02:47 -- common/autotest_common.sh@926 -- # '[' -z 3805096 ']' 00:05:33.019 04:02:47 -- common/autotest_common.sh@930 -- # kill -0 3805096 00:05:33.019 04:02:47 -- common/autotest_common.sh@931 -- # uname 00:05:33.019 04:02:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:33.019 04:02:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3805096 00:05:33.019 04:02:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:33.019 04:02:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:33.019 04:02:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3805096' 00:05:33.019 killing process with pid 3805096 00:05:33.019 04:02:47 -- common/autotest_common.sh@945 -- # kill 3805096 00:05:33.019 04:02:47 -- common/autotest_common.sh@950 -- # wait 3805096 00:05:33.624 00:05:33.624 real 0m4.942s 00:05:33.624 user 0m5.058s 00:05:33.624 sys 0m0.903s 00:05:33.624 04:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.624 04:02:48 -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.624 ************************************ 00:05:33.624 END TEST locking_app_on_unlocked_coremask 00:05:33.624 ************************************ 00:05:33.624 04:02:48 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:33.624 04:02:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.624 04:02:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.624 04:02:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.624 ************************************ 00:05:33.624 START TEST locking_app_on_locked_coremask 00:05:33.624 ************************************ 00:05:33.624 04:02:48 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:33.624 04:02:48 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3806038 00:05:33.624 04:02:48 -- event/cpu_locks.sh@116 -- # waitforlisten 3806038 /var/tmp/spdk.sock 00:05:33.624 04:02:48 -- common/autotest_common.sh@819 -- # '[' -z 3806038 ']' 00:05:33.624 04:02:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.624 04:02:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.624 04:02:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.624 04:02:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.624 04:02:48 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.624 04:02:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.885 [2024-05-14 04:02:48.249606] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:33.885 [2024-05-14 04:02:48.249733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806038 ] 00:05:33.885 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.885 [2024-05-14 04:02:48.368097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.885 [2024-05-14 04:02:48.460263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.885 [2024-05-14 04:02:48.460436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.456 04:02:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:34.456 04:02:48 -- common/autotest_common.sh@852 -- # return 0 00:05:34.456 04:02:48 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3806219 00:05:34.456 04:02:48 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3806219 /var/tmp/spdk2.sock 00:05:34.456 04:02:48 -- common/autotest_common.sh@640 -- # local es=0 00:05:34.456 04:02:48 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3806219 /var/tmp/spdk2.sock 00:05:34.456 04:02:48 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.456 04:02:48 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:34.456 04:02:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:34.456 04:02:48 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:34.456 04:02:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:34.456 04:02:48 -- common/autotest_common.sh@643 -- # waitforlisten 3806219 /var/tmp/spdk2.sock 00:05:34.456 04:02:48 -- common/autotest_common.sh@819 -- # '[' -z 3806219 ']' 00:05:34.456 04:02:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.456 04:02:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.456 04:02:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.456 04:02:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.456 04:02:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.456 [2024-05-14 04:02:49.040775] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:34.456 [2024-05-14 04:02:49.040892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806219 ] 00:05:34.716 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.716 [2024-05-14 04:02:49.195398] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3806038 has claimed it. 00:05:34.716 [2024-05-14 04:02:49.195444] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
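locking_app_on_locked_coremask is the negative counterpart: with locking left on, a second target asked for the same core mask refuses to start, which is exactly the pair of app.c errors logged above. A sketch of provoking that failure, with the harness's NOT wrapper again reduced to a bang so that the second launch's non-zero exit is the expected outcome:

    ./build/bin/spdk_tgt -m 0x1 &                 # holds the core-0 lock
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    # same mask, locks still enabled -> must fail with
    #   "Cannot create lock on core 0, probably process $pid1 has claimed it."
    #   "Unable to acquire lock on assigned core mask - exiting."
    ! ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock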
00:05:35.288 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3806219) - No such process 00:05:35.288 ERROR: process (pid: 3806219) is no longer running 00:05:35.288 04:02:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.288 04:02:49 -- common/autotest_common.sh@852 -- # return 1 00:05:35.288 04:02:49 -- common/autotest_common.sh@643 -- # es=1 00:05:35.288 04:02:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:35.288 04:02:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:35.288 04:02:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:35.288 04:02:49 -- event/cpu_locks.sh@122 -- # locks_exist 3806038 00:05:35.288 04:02:49 -- event/cpu_locks.sh@22 -- # lslocks -p 3806038 00:05:35.288 04:02:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.288 lslocks: write error 00:05:35.288 04:02:49 -- event/cpu_locks.sh@124 -- # killprocess 3806038 00:05:35.288 04:02:49 -- common/autotest_common.sh@926 -- # '[' -z 3806038 ']' 00:05:35.288 04:02:49 -- common/autotest_common.sh@930 -- # kill -0 3806038 00:05:35.288 04:02:49 -- common/autotest_common.sh@931 -- # uname 00:05:35.288 04:02:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.288 04:02:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3806038 00:05:35.288 04:02:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:35.288 04:02:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:35.288 04:02:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3806038' 00:05:35.288 killing process with pid 3806038 00:05:35.288 04:02:49 -- common/autotest_common.sh@945 -- # kill 3806038 00:05:35.288 04:02:49 -- common/autotest_common.sh@950 -- # wait 3806038 00:05:36.228 00:05:36.228 real 0m2.497s 00:05:36.228 user 0m2.590s 00:05:36.228 sys 0m0.646s 00:05:36.228 04:02:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.228 04:02:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.228 ************************************ 00:05:36.228 END TEST locking_app_on_locked_coremask 00:05:36.228 ************************************ 00:05:36.228 04:02:50 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:36.228 04:02:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.228 04:02:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.228 04:02:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.228 ************************************ 00:05:36.228 START TEST locking_overlapped_coremask 00:05:36.228 ************************************ 00:05:36.228 04:02:50 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:36.228 04:02:50 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3806655 00:05:36.228 04:02:50 -- event/cpu_locks.sh@133 -- # waitforlisten 3806655 /var/tmp/spdk.sock 00:05:36.228 04:02:50 -- common/autotest_common.sh@819 -- # '[' -z 3806655 ']' 00:05:36.228 04:02:50 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:36.228 04:02:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.228 04:02:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.228 04:02:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:36.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.228 04:02:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.228 04:02:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.228 [2024-05-14 04:02:50.776378] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:36.228 [2024-05-14 04:02:50.776506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806655 ] 00:05:36.489 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.489 [2024-05-14 04:02:50.894054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.489 [2024-05-14 04:02:50.986893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.489 [2024-05-14 04:02:50.987222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.489 [2024-05-14 04:02:50.987239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.489 [2024-05-14 04:02:50.987245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.059 04:02:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.059 04:02:51 -- common/autotest_common.sh@852 -- # return 0 00:05:37.059 04:02:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3806691 00:05:37.059 04:02:51 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:37.059 04:02:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3806691 /var/tmp/spdk2.sock 00:05:37.059 04:02:51 -- common/autotest_common.sh@640 -- # local es=0 00:05:37.059 04:02:51 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3806691 /var/tmp/spdk2.sock 00:05:37.059 04:02:51 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:37.059 04:02:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:37.059 04:02:51 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:37.059 04:02:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:37.059 04:02:51 -- common/autotest_common.sh@643 -- # waitforlisten 3806691 /var/tmp/spdk2.sock 00:05:37.059 04:02:51 -- common/autotest_common.sh@819 -- # '[' -z 3806691 ']' 00:05:37.059 04:02:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.059 04:02:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.059 04:02:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.059 04:02:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.059 04:02:51 -- common/autotest_common.sh@10 -- # set +x 00:05:37.059 [2024-05-14 04:02:51.577333] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
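locking_overlapped_coremask moves from identical masks to merely overlapping ones: the first target runs with -m 0x7 (reactors on cores 0, 1 and 2, as the three reactor_run notices above show) and the challenger asks for -m 0x1c. The failure that follows is determined entirely by the bit overlap, which is worth spelling out; a quick check of the arithmetic in the shell:

    # 0x7  = 0b00111 -> cores 0,1,2   (first target)
    # 0x1c = 0b11100 -> cores 2,3,4   (second target)
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2

So the second target is expected to die claiming core 2, which is the error the next lines show.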
00:05:37.059 [2024-05-14 04:02:51.577433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806691 ] 00:05:37.059 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.316 [2024-05-14 04:02:51.707403] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3806655 has claimed it. 00:05:37.316 [2024-05-14 04:02:51.707452] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:37.573 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3806691) - No such process 00:05:37.573 ERROR: process (pid: 3806691) is no longer running 00:05:37.573 04:02:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.573 04:02:52 -- common/autotest_common.sh@852 -- # return 1 00:05:37.573 04:02:52 -- common/autotest_common.sh@643 -- # es=1 00:05:37.573 04:02:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:37.573 04:02:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:37.573 04:02:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:37.573 04:02:52 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:37.573 04:02:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:37.573 04:02:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:37.573 04:02:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:37.573 04:02:52 -- event/cpu_locks.sh@141 -- # killprocess 3806655 00:05:37.573 04:02:52 -- common/autotest_common.sh@926 -- # '[' -z 3806655 ']' 00:05:37.573 04:02:52 -- common/autotest_common.sh@930 -- # kill -0 3806655 00:05:37.573 04:02:52 -- common/autotest_common.sh@931 -- # uname 00:05:37.573 04:02:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:37.573 04:02:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3806655 00:05:37.833 04:02:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:37.833 04:02:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:37.833 04:02:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3806655' 00:05:37.833 killing process with pid 3806655 00:05:37.833 04:02:52 -- common/autotest_common.sh@945 -- # kill 3806655 00:05:37.833 04:02:52 -- common/autotest_common.sh@950 -- # wait 3806655 00:05:38.772 00:05:38.772 real 0m2.342s 00:05:38.772 user 0m6.147s 00:05:38.772 sys 0m0.528s 00:05:38.772 04:02:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.772 04:02:53 -- common/autotest_common.sh@10 -- # set +x 00:05:38.772 ************************************ 00:05:38.772 END TEST locking_overlapped_coremask 00:05:38.772 ************************************ 00:05:38.772 04:02:53 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:38.772 04:02:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.772 04:02:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.772 04:02:53 -- common/autotest_common.sh@10 -- # set +x 00:05:38.772 ************************************ 00:05:38.772 START 
TEST locking_overlapped_coremask_via_rpc 00:05:38.772 ************************************ 00:05:38.772 04:02:53 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:38.772 04:02:53 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3807024 00:05:38.772 04:02:53 -- event/cpu_locks.sh@149 -- # waitforlisten 3807024 /var/tmp/spdk.sock 00:05:38.772 04:02:53 -- common/autotest_common.sh@819 -- # '[' -z 3807024 ']' 00:05:38.772 04:02:53 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:38.772 04:02:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.772 04:02:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.772 04:02:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.772 04:02:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.772 04:02:53 -- common/autotest_common.sh@10 -- # set +x 00:05:38.772 [2024-05-14 04:02:53.173667] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:38.772 [2024-05-14 04:02:53.173808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807024 ] 00:05:38.772 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.772 [2024-05-14 04:02:53.304394] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:38.772 [2024-05-14 04:02:53.304438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.029 [2024-05-14 04:02:53.398120] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.029 [2024-05-14 04:02:53.398383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.029 [2024-05-14 04:02:53.398403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.029 [2024-05-14 04:02:53.398410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.287 04:02:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:39.287 04:02:53 -- common/autotest_common.sh@852 -- # return 0 00:05:39.287 04:02:53 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3807321 00:05:39.287 04:02:53 -- event/cpu_locks.sh@153 -- # waitforlisten 3807321 /var/tmp/spdk2.sock 00:05:39.287 04:02:53 -- common/autotest_common.sh@819 -- # '[' -z 3807321 ']' 00:05:39.287 04:02:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.287 04:02:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.287 04:02:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.287 04:02:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.288 04:02:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.288 04:02:53 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:39.547 [2024-05-14 04:02:53.962544] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:39.547 [2024-05-14 04:02:53.962690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807321 ] 00:05:39.547 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.547 [2024-05-14 04:02:54.133372] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:39.547 [2024-05-14 04:02:54.133416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.806 [2024-05-14 04:02:54.317245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.806 [2024-05-14 04:02:54.317519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.806 [2024-05-14 04:02:54.317656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.806 [2024-05-14 04:02:54.317689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:40.743 04:02:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.743 04:02:55 -- common/autotest_common.sh@852 -- # return 0 00:05:40.743 04:02:55 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.743 04:02:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.743 04:02:55 -- common/autotest_common.sh@10 -- # set +x 00:05:40.743 04:02:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.743 04:02:55 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.743 04:02:55 -- common/autotest_common.sh@640 -- # local es=0 00:05:40.743 04:02:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.743 04:02:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:40.743 04:02:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.743 04:02:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:40.743 04:02:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.743 04:02:55 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.743 04:02:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.743 04:02:55 -- common/autotest_common.sh@10 -- # set +x 00:05:40.743 [2024-05-14 04:02:55.323285] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3807024 has claimed it. 
00:05:41.003 request: 00:05:41.003 { 00:05:41.003 "method": "framework_enable_cpumask_locks", 00:05:41.003 "req_id": 1 00:05:41.003 } 00:05:41.003 Got JSON-RPC error response 00:05:41.003 response: 00:05:41.003 { 00:05:41.003 "code": -32603, 00:05:41.003 "message": "Failed to claim CPU core: 2" 00:05:41.003 } 00:05:41.003 04:02:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:41.003 04:02:55 -- common/autotest_common.sh@643 -- # es=1 00:05:41.003 04:02:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:41.003 04:02:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:41.003 04:02:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:41.003 04:02:55 -- event/cpu_locks.sh@158 -- # waitforlisten 3807024 /var/tmp/spdk.sock 00:05:41.003 04:02:55 -- common/autotest_common.sh@819 -- # '[' -z 3807024 ']' 00:05:41.003 04:02:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.003 04:02:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.003 04:02:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.003 04:02:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.003 04:02:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.003 04:02:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.003 04:02:55 -- common/autotest_common.sh@852 -- # return 0 00:05:41.003 04:02:55 -- event/cpu_locks.sh@159 -- # waitforlisten 3807321 /var/tmp/spdk2.sock 00:05:41.003 04:02:55 -- common/autotest_common.sh@819 -- # '[' -z 3807321 ']' 00:05:41.003 04:02:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.003 04:02:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.003 04:02:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:41.003 04:02:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.003 04:02:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.261 04:02:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.261 04:02:55 -- common/autotest_common.sh@852 -- # return 0 00:05:41.261 04:02:55 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:41.261 04:02:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:41.262 04:02:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:41.262 04:02:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:41.262 00:05:41.262 real 0m2.563s 00:05:41.262 user 0m0.803s 00:05:41.262 sys 0m0.177s 00:05:41.262 04:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.262 04:02:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.262 ************************************ 00:05:41.262 END TEST locking_overlapped_coremask_via_rpc 00:05:41.262 ************************************ 00:05:41.262 04:02:55 -- event/cpu_locks.sh@174 -- # cleanup 00:05:41.262 04:02:55 -- event/cpu_locks.sh@15 -- # [[ -z 3807024 ]] 00:05:41.262 04:02:55 -- event/cpu_locks.sh@15 -- # killprocess 3807024 00:05:41.262 04:02:55 -- common/autotest_common.sh@926 -- # '[' -z 3807024 ']' 00:05:41.262 04:02:55 -- common/autotest_common.sh@930 -- # kill -0 3807024 00:05:41.262 04:02:55 -- common/autotest_common.sh@931 -- # uname 00:05:41.262 04:02:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:41.262 04:02:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3807024 00:05:41.262 04:02:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:41.262 04:02:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:41.262 04:02:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3807024' 00:05:41.262 killing process with pid 3807024 00:05:41.262 04:02:55 -- common/autotest_common.sh@945 -- # kill 3807024 00:05:41.262 04:02:55 -- common/autotest_common.sh@950 -- # wait 3807024 00:05:42.196 04:02:56 -- event/cpu_locks.sh@16 -- # [[ -z 3807321 ]] 00:05:42.196 04:02:56 -- event/cpu_locks.sh@16 -- # killprocess 3807321 00:05:42.196 04:02:56 -- common/autotest_common.sh@926 -- # '[' -z 3807321 ']' 00:05:42.196 04:02:56 -- common/autotest_common.sh@930 -- # kill -0 3807321 00:05:42.196 04:02:56 -- common/autotest_common.sh@931 -- # uname 00:05:42.196 04:02:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:42.196 04:02:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3807321 00:05:42.196 04:02:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:42.196 04:02:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:42.196 04:02:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3807321' 00:05:42.196 killing process with pid 3807321 00:05:42.196 04:02:56 -- common/autotest_common.sh@945 -- # kill 3807321 00:05:42.196 04:02:56 -- common/autotest_common.sh@950 -- # wait 3807321 00:05:43.135 04:02:57 -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.135 04:02:57 -- event/cpu_locks.sh@1 -- # cleanup 00:05:43.135 04:02:57 -- event/cpu_locks.sh@15 -- # [[ -z 3807024 ]] 00:05:43.135 04:02:57 -- event/cpu_locks.sh@15 -- # killprocess 3807024 
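The check_remaining_locks helper above works purely on the lock files the first target left behind: after framework_enable_cpumask_locks succeeds on the -m 0x7 target, one file per claimed core is expected under /var/tmp, and the glob must match exactly the brace expansion for cores 0-2. A quick way to see the same thing by hand (file names taken from the expansion in the log):

    ls /var/tmp/spdk_cpu_lock_*    # should list spdk_cpu_lock_000, _001 and _002, and nothing else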
00:05:43.135 04:02:57 -- common/autotest_common.sh@926 -- # '[' -z 3807024 ']' 00:05:43.135 04:02:57 -- common/autotest_common.sh@930 -- # kill -0 3807024 00:05:43.135 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3807024) - No such process 00:05:43.135 04:02:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3807024 is not found' 00:05:43.135 Process with pid 3807024 is not found 00:05:43.135 04:02:57 -- event/cpu_locks.sh@16 -- # [[ -z 3807321 ]] 00:05:43.135 04:02:57 -- event/cpu_locks.sh@16 -- # killprocess 3807321 00:05:43.135 04:02:57 -- common/autotest_common.sh@926 -- # '[' -z 3807321 ']' 00:05:43.135 04:02:57 -- common/autotest_common.sh@930 -- # kill -0 3807321 00:05:43.135 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3807321) - No such process 00:05:43.135 04:02:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3807321 is not found' 00:05:43.135 Process with pid 3807321 is not found 00:05:43.135 04:02:57 -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.135 00:05:43.135 real 0m23.309s 00:05:43.135 user 0m39.499s 00:05:43.135 sys 0m5.291s 00:05:43.135 04:02:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.135 04:02:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.135 ************************************ 00:05:43.135 END TEST cpu_locks 00:05:43.135 ************************************ 00:05:43.135 00:05:43.135 real 0m47.285s 00:05:43.135 user 1m25.620s 00:05:43.135 sys 0m8.363s 00:05:43.135 04:02:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.135 04:02:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.135 ************************************ 00:05:43.135 END TEST event 00:05:43.135 ************************************ 00:05:43.135 04:02:57 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:05:43.135 04:02:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.135 04:02:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.135 04:02:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.135 ************************************ 00:05:43.135 START TEST thread 00:05:43.135 ************************************ 00:05:43.135 04:02:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:05:43.135 * Looking for test storage... 00:05:43.135 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread 00:05:43.135 04:02:57 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.135 04:02:57 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:43.135 04:02:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.135 04:02:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.135 ************************************ 00:05:43.135 START TEST thread_poller_perf 00:05:43.135 ************************************ 00:05:43.135 04:02:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.135 [2024-05-14 04:02:57.636105] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:43.135 [2024-05-14 04:02:57.636296] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808038 ] 00:05:43.394 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.394 [2024-05-14 04:02:57.768680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.394 [2024-05-14 04:02:57.860368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.394 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:44.775 ====================================== 00:05:44.775 busy:1910225142 (cyc) 00:05:44.775 total_run_count: 381000 00:05:44.775 tsc_hz: 1900000000 (cyc) 00:05:44.775 ====================================== 00:05:44.775 poller_cost: 5013 (cyc), 2638 (nsec) 00:05:44.775 00:05:44.775 real 0m1.426s 00:05:44.775 user 0m1.267s 00:05:44.775 sys 0m0.150s 00:05:44.775 04:02:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.775 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.775 ************************************ 00:05:44.775 END TEST thread_poller_perf 00:05:44.775 ************************************ 00:05:44.775 04:02:59 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:44.775 04:02:59 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:44.775 04:02:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.775 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.775 ************************************ 00:05:44.775 START TEST thread_poller_perf 00:05:44.775 ************************************ 00:05:44.775 04:02:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:44.775 [2024-05-14 04:02:59.104834] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:44.775 [2024-05-14 04:02:59.104982] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808355 ] 00:05:44.775 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.775 [2024-05-14 04:02:59.218287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.775 [2024-05-14 04:02:59.306732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.775 Running 1000 pollers for 1 seconds with 0 microseconds period. 
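The poller_cost line in the first run above is just the busy cycle count divided by the number of poller invocations, converted to time with the printed TSC rate. Worked through with the numbers from that run:

    # cycles per poller call: 1910225142 busy cyc / 381000 calls  ≈ 5013 cyc
    # time per poller call:   5013 cyc / 1900000000 cyc per sec   ≈ 2638 nsec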
00:05:46.157 ====================================== 00:05:46.157 busy:1902370614 (cyc) 00:05:46.157 total_run_count: 5232000 00:05:46.157 tsc_hz: 1900000000 (cyc) 00:05:46.157 ====================================== 00:05:46.157 poller_cost: 363 (cyc), 191 (nsec) 00:05:46.157 00:05:46.157 real 0m1.394s 00:05:46.157 user 0m1.246s 00:05:46.157 sys 0m0.138s 00:05:46.157 04:03:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.157 04:03:00 -- common/autotest_common.sh@10 -- # set +x 00:05:46.157 ************************************ 00:05:46.157 END TEST thread_poller_perf 00:05:46.157 ************************************ 00:05:46.157 04:03:00 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:46.157 00:05:46.157 real 0m2.975s 00:05:46.157 user 0m2.578s 00:05:46.157 sys 0m0.399s 00:05:46.157 04:03:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.157 04:03:00 -- common/autotest_common.sh@10 -- # set +x 00:05:46.157 ************************************ 00:05:46.157 END TEST thread 00:05:46.157 ************************************ 00:05:46.157 04:03:00 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:05:46.157 04:03:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.157 04:03:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.157 04:03:00 -- common/autotest_common.sh@10 -- # set +x 00:05:46.157 ************************************ 00:05:46.157 START TEST accel 00:05:46.157 ************************************ 00:05:46.157 04:03:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:05:46.157 * Looking for test storage... 00:05:46.158 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:05:46.158 04:03:00 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:46.158 04:03:00 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:46.158 04:03:00 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:46.158 04:03:00 -- accel/accel.sh@59 -- # spdk_tgt_pid=3808763 00:05:46.158 04:03:00 -- accel/accel.sh@60 -- # waitforlisten 3808763 00:05:46.158 04:03:00 -- common/autotest_common.sh@819 -- # '[' -z 3808763 ']' 00:05:46.158 04:03:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.158 04:03:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.158 04:03:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.158 04:03:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.158 04:03:00 -- common/autotest_common.sh@10 -- # set +x 00:05:46.158 04:03:00 -- accel/accel.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:46.158 04:03:00 -- accel/accel.sh@58 -- # build_accel_config 00:05:46.158 04:03:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.158 04:03:00 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:46.158 04:03:00 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:46.158 04:03:00 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:46.158 04:03:00 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:46.158 04:03:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.158 04:03:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.158 04:03:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.158 04:03:00 -- accel/accel.sh@42 -- # jq -r . 00:05:46.158 [2024-05-14 04:03:00.715492] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:46.158 [2024-05-14 04:03:00.715640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808763 ] 00:05:46.421 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.421 [2024-05-14 04:03:00.849811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.421 [2024-05-14 04:03:00.943893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.421 [2024-05-14 04:03:00.944105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.421 [2024-05-14 04:03:00.948672] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:46.421 [2024-05-14 04:03:00.956642] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:54.622 04:03:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.622 04:03:09 -- common/autotest_common.sh@852 -- # return 0 00:05:54.622 04:03:09 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:54.622 04:03:09 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:54.622 04:03:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.622 04:03:09 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:54.622 04:03:09 -- common/autotest_common.sh@10 -- # set +x 00:05:54.622 04:03:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=iaa 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=iaa 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for 
opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.882 04:03:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # IFS== 00:05:54.882 04:03:09 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.882 04:03:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:54.882 04:03:09 -- accel/accel.sh@67 -- # killprocess 3808763 00:05:54.882 04:03:09 -- common/autotest_common.sh@926 -- # '[' -z 3808763 ']' 00:05:54.882 04:03:09 -- common/autotest_common.sh@930 -- # kill -0 3808763 00:05:54.882 04:03:09 -- common/autotest_common.sh@931 -- # uname 00:05:54.882 04:03:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.882 04:03:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3808763 00:05:54.882 04:03:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.882 04:03:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.882 04:03:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3808763' 00:05:54.882 killing process with pid 3808763 00:05:54.882 04:03:09 -- common/autotest_common.sh@945 -- # kill 3808763 00:05:54.882 04:03:09 -- common/autotest_common.sh@950 -- # wait 3808763 00:05:58.183 04:03:12 -- accel/accel.sh@68 -- # trap - ERR 00:05:58.183 04:03:12 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:58.183 04:03:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:58.183 04:03:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.183 04:03:12 -- common/autotest_common.sh@10 -- # set +x 00:05:58.183 04:03:12 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:58.183 04:03:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:58.183 04:03:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.183 04:03:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.183 04:03:12 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:58.183 04:03:12 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:58.183 04:03:12 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:58.183 04:03:12 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:58.183 04:03:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.183 04:03:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.183 04:03:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.183 04:03:12 -- accel/accel.sh@42 -- # jq -r . 
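The long for-loop above is accel.sh reading back which engine claimed each opcode after the DSA and IAA modules were enabled: most opcodes land on dsa, a couple on iaa (typically the compress/decompress opcodes), and the rest fall back to software. The same table can be dumped directly with the RPC the loop is parsing; the jq filter is the one accel.sh uses, and running scripts/rpc.py from the SPDK repo root is an assumption:

    scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'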
00:05:58.183 04:03:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.183 04:03:12 -- common/autotest_common.sh@10 -- # set +x 00:05:58.183 04:03:12 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:58.183 04:03:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:58.183 04:03:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.183 04:03:12 -- common/autotest_common.sh@10 -- # set +x 00:05:58.183 ************************************ 00:05:58.183 START TEST accel_missing_filename 00:05:58.183 ************************************ 00:05:58.183 04:03:12 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:58.183 04:03:12 -- common/autotest_common.sh@640 -- # local es=0 00:05:58.183 04:03:12 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:58.183 04:03:12 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:58.183 04:03:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:58.183 04:03:12 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:58.183 04:03:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:58.183 04:03:12 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:58.183 04:03:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:58.183 04:03:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.183 04:03:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.183 04:03:12 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:58.183 04:03:12 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:58.183 04:03:12 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:58.183 04:03:12 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:58.183 04:03:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.183 04:03:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.183 04:03:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.183 04:03:12 -- accel/accel.sh@42 -- # jq -r . 00:05:58.183 [2024-05-14 04:03:12.706837] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:58.183 [2024-05-14 04:03:12.706988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811721 ] 00:05:58.444 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.444 [2024-05-14 04:03:12.840291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.444 [2024-05-14 04:03:12.930907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.444 [2024-05-14 04:03:12.935492] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:58.444 [2024-05-14 04:03:12.943466] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:05.027 [2024-05-14 04:03:19.339059] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.937 [2024-05-14 04:03:21.228411] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:06.937 A filename is required. 
00:06:06.937 04:03:21 -- common/autotest_common.sh@643 -- # es=234 00:06:06.937 04:03:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:06.937 04:03:21 -- common/autotest_common.sh@652 -- # es=106 00:06:06.937 04:03:21 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:06.937 04:03:21 -- common/autotest_common.sh@660 -- # es=1 00:06:06.937 04:03:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:06.937 00:06:06.937 real 0m8.734s 00:06:06.937 user 0m2.326s 00:06:06.937 sys 0m0.263s 00:06:06.937 04:03:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.937 04:03:21 -- common/autotest_common.sh@10 -- # set +x 00:06:06.937 ************************************ 00:06:06.937 END TEST accel_missing_filename 00:06:06.937 ************************************ 00:06:06.937 04:03:21 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:06.937 04:03:21 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:06.937 04:03:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.937 04:03:21 -- common/autotest_common.sh@10 -- # set +x 00:06:06.937 ************************************ 00:06:06.938 START TEST accel_compress_verify 00:06:06.938 ************************************ 00:06:06.938 04:03:21 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:06.938 04:03:21 -- common/autotest_common.sh@640 -- # local es=0 00:06:06.938 04:03:21 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:06.938 04:03:21 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:06.938 04:03:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:06.938 04:03:21 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:06.938 04:03:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:06.938 04:03:21 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:06.938 04:03:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:06.938 04:03:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.938 04:03:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.938 04:03:21 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:06.938 04:03:21 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:06.938 04:03:21 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:06.938 04:03:21 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:06.938 04:03:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.938 04:03:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.938 04:03:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.938 04:03:21 -- accel/accel.sh@42 -- # jq -r . 00:06:06.938 [2024-05-14 04:03:21.471849] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:06.938 [2024-05-14 04:03:21.471990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813537 ] 00:06:07.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.197 [2024-05-14 04:03:21.570871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.197 [2024-05-14 04:03:21.659404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.197 [2024-05-14 04:03:21.663944] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:07.198 [2024-05-14 04:03:21.671924] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:13.787 [2024-05-14 04:03:28.055062] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.700 [2024-05-14 04:03:29.904359] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:15.700 00:06:15.700 Compression does not support the verify option, aborting. 00:06:15.700 04:03:30 -- common/autotest_common.sh@643 -- # es=161 00:06:15.700 04:03:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:15.700 04:03:30 -- common/autotest_common.sh@652 -- # es=33 00:06:15.700 04:03:30 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:15.700 04:03:30 -- common/autotest_common.sh@660 -- # es=1 00:06:15.700 04:03:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:15.700 00:06:15.700 real 0m8.641s 00:06:15.700 user 0m2.264s 00:06:15.700 sys 0m0.237s 00:06:15.700 04:03:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.700 04:03:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.700 ************************************ 00:06:15.700 END TEST accel_compress_verify 00:06:15.700 ************************************ 00:06:15.700 04:03:30 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:15.700 04:03:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:15.700 04:03:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.700 04:03:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.700 ************************************ 00:06:15.700 START TEST accel_wrong_workload 00:06:15.700 ************************************ 00:06:15.700 04:03:30 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:15.700 04:03:30 -- common/autotest_common.sh@640 -- # local es=0 00:06:15.700 04:03:30 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:15.700 04:03:30 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:15.700 04:03:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:15.700 04:03:30 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:15.700 04:03:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:15.700 04:03:30 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:15.700 04:03:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:15.700 04:03:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.700 04:03:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.700 04:03:30 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:15.700 04:03:30 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:15.700 04:03:30 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 
00:06:15.700 04:03:30 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:15.700 04:03:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.700 04:03:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.700 04:03:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.700 04:03:30 -- accel/accel.sh@42 -- # jq -r . 00:06:15.700 Unsupported workload type: foobar 00:06:15.700 [2024-05-14 04:03:30.134742] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:15.700 accel_perf options: 00:06:15.700 [-h help message] 00:06:15.700 [-q queue depth per core] 00:06:15.700 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:15.700 [-T number of threads per core 00:06:15.700 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:15.700 [-t time in seconds] 00:06:15.700 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:15.700 [ dif_verify, , dif_generate, dif_generate_copy 00:06:15.700 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:15.700 [-l for compress/decompress workloads, name of uncompressed input file 00:06:15.700 [-S for crc32c workload, use this seed value (default 0) 00:06:15.700 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:15.701 [-f for fill workload, use this BYTE value (default 255) 00:06:15.701 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:15.701 [-y verify result if this switch is on] 00:06:15.701 [-a tasks to allocate per core (default: same value as -q)] 00:06:15.701 Can be used to spread operations across a wider range of memory. 
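All of these negative cases rely on the NOT wrapper from autotest_common.sh, visible in the xtrace above as valid_exec_arg plus the es bookkeeping: the wrapped accel_perf call is expected to fail, and the wrapper turns that failure into a passing test. A minimal sketch of the idea (illustrative only; the real helper in autotest_common.sh also special-cases exit statuses above 128):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command and remember its exit status
        (( es != 0 ))    # succeed only if the wrapped command failed
    }
    NOT accel_perf -t 1 -w foobar   # passes, because foobar is an unsupported workload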
00:06:15.701 04:03:30 -- common/autotest_common.sh@643 -- # es=1 00:06:15.701 04:03:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:15.701 04:03:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:15.701 04:03:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:15.701 00:06:15.701 real 0m0.051s 00:06:15.701 user 0m0.052s 00:06:15.701 sys 0m0.028s 00:06:15.701 04:03:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.701 04:03:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.701 ************************************ 00:06:15.701 END TEST accel_wrong_workload 00:06:15.701 ************************************ 00:06:15.701 04:03:30 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:15.701 04:03:30 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:15.701 04:03:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.701 04:03:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.701 ************************************ 00:06:15.701 START TEST accel_negative_buffers 00:06:15.701 ************************************ 00:06:15.701 04:03:30 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:15.701 04:03:30 -- common/autotest_common.sh@640 -- # local es=0 00:06:15.701 04:03:30 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:15.701 04:03:30 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:15.701 04:03:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:15.701 04:03:30 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:15.701 04:03:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:15.701 04:03:30 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:15.701 04:03:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:15.701 04:03:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.701 04:03:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.701 04:03:30 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:15.701 04:03:30 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:15.701 04:03:30 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:15.701 04:03:30 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:15.701 04:03:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.701 04:03:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.701 04:03:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.701 04:03:30 -- accel/accel.sh@42 -- # jq -r . 00:06:15.701 -x option must be non-negative. 00:06:15.701 [2024-05-14 04:03:30.217763] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:15.701 accel_perf options: 00:06:15.701 [-h help message] 00:06:15.701 [-q queue depth per core] 00:06:15.701 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:15.701 [-T number of threads per core 00:06:15.701 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:15.701 [-t time in seconds] 00:06:15.701 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:15.701 [ dif_verify, , dif_generate, dif_generate_copy 00:06:15.701 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:15.701 [-l for compress/decompress workloads, name of uncompressed input file 00:06:15.701 [-S for crc32c workload, use this seed value (default 0) 00:06:15.701 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:15.701 [-f for fill workload, use this BYTE value (default 255) 00:06:15.701 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:15.701 [-y verify result if this switch is on] 00:06:15.701 [-a tasks to allocate per core (default: same value as -q)] 00:06:15.701 Can be used to spread operations across a wider range of memory. 00:06:15.701 04:03:30 -- common/autotest_common.sh@643 -- # es=1 00:06:15.701 04:03:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:15.701 04:03:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:15.701 04:03:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:15.701 00:06:15.701 real 0m0.052s 00:06:15.701 user 0m0.050s 00:06:15.701 sys 0m0.033s 00:06:15.701 04:03:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.701 04:03:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.701 ************************************ 00:06:15.701 END TEST accel_negative_buffers 00:06:15.701 ************************************ 00:06:15.701 04:03:30 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:15.701 04:03:30 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:15.701 04:03:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.701 04:03:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.701 ************************************ 00:06:15.701 START TEST accel_crc32c 00:06:15.701 ************************************ 00:06:15.701 04:03:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:15.701 04:03:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.701 04:03:30 -- accel/accel.sh@17 -- # local accel_module 00:06:15.701 04:03:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:15.701 04:03:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:15.701 04:03:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.701 04:03:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.701 04:03:30 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:15.701 04:03:30 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:15.701 04:03:30 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:15.701 04:03:30 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:15.701 04:03:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.701 04:03:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.701 04:03:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.701 04:03:30 -- accel/accel.sh@42 -- # jq -r . 00:06:15.961 [2024-05-14 04:03:30.300514] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:15.961 [2024-05-14 04:03:30.300617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3815171 ] 00:06:15.961 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.961 [2024-05-14 04:03:30.418160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.961 [2024-05-14 04:03:30.511410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.961 [2024-05-14 04:03:30.515941] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:15.961 [2024-05-14 04:03:30.523927] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:25.998 04:03:39 -- accel/accel.sh@18 -- # out=' 00:06:25.998 SPDK Configuration: 00:06:25.998 Core mask: 0x1 00:06:25.998 00:06:25.998 Accel Perf Configuration: 00:06:25.998 Workload Type: crc32c 00:06:25.998 CRC-32C seed: 32 00:06:25.998 Transfer size: 4096 bytes 00:06:25.998 Vector count 1 00:06:25.998 Module: dsa 00:06:25.998 Queue depth: 32 00:06:25.998 Allocate depth: 32 00:06:25.998 # threads/core: 1 00:06:25.998 Run time: 1 seconds 00:06:25.998 Verify: Yes 00:06:25.998 00:06:25.998 Running for 1 seconds... 00:06:25.998 00:06:25.998 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.998 ------------------------------------------------------------------------------------ 00:06:25.998 0,0 355680/s 1389 MiB/s 0 0 00:06:25.998 ==================================================================================== 00:06:25.998 Total 355680/s 1389 MiB/s 0 0' 00:06:25.998 04:03:39 -- accel/accel.sh@20 -- # IFS=: 00:06:25.998 04:03:39 -- accel/accel.sh@20 -- # read -r var val 00:06:25.998 04:03:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:25.998 04:03:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:25.998 04:03:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.998 04:03:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.998 04:03:40 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:25.998 04:03:40 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:25.998 04:03:40 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:25.998 04:03:40 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:25.998 04:03:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.998 04:03:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.998 04:03:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.998 04:03:40 -- accel/accel.sh@42 -- # jq -r . 00:06:25.998 [2024-05-14 04:03:40.035737] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
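The bandwidth column in the crc32c run above follows directly from the transfer count and transfer size printed with it. With the numbers from that run:

    # 355680 transfers/s * 4096 bytes ≈ 1456865280 B/s ≈ 1389 MiB/s   (1 MiB = 1048576 B)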
00:06:25.998 [2024-05-14 04:03:40.035859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3817236 ] 00:06:25.998 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.998 [2024-05-14 04:03:40.133768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.998 [2024-05-14 04:03:40.224584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.998 [2024-05-14 04:03:40.229109] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:25.998 [2024-05-14 04:03:40.237089] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val=0x1 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val=crc32c 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val=32 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val=dsa 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@23 -- # accel_module=dsa 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val=32 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- 
accel/accel.sh@21 -- # val=32 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val=1 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val=Yes 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.582 04:03:46 -- accel/accel.sh@21 -- # val= 00:06:32.582 04:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.582 04:03:46 -- accel/accel.sh@20 -- # read -r var val 00:06:35.125 04:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.125 04:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.125 04:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.125 04:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.125 04:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.125 04:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.125 04:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.125 04:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.125 04:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.125 04:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.125 04:03:49 -- accel/accel.sh@21 -- # val= 00:06:35.125 04:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # IFS=: 00:06:35.125 04:03:49 -- accel/accel.sh@20 -- # read -r var val 00:06:35.125 04:03:49 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:06:35.125 04:03:49 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:35.125 04:03:49 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:06:35.125 00:06:35.125 real 0m19.394s 00:06:35.125 user 0m6.541s 00:06:35.125 sys 0m0.479s 00:06:35.125 04:03:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.125 04:03:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.125 ************************************ 00:06:35.125 END TEST accel_crc32c 00:06:35.125 ************************************ 00:06:35.125 04:03:49 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:35.125 04:03:49 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 
00:06:35.125 04:03:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.125 04:03:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.125 ************************************ 00:06:35.125 START TEST accel_crc32c_C2 00:06:35.125 ************************************ 00:06:35.125 04:03:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:35.125 04:03:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.125 04:03:49 -- accel/accel.sh@17 -- # local accel_module 00:06:35.125 04:03:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:35.125 04:03:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:35.125 04:03:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.125 04:03:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.125 04:03:49 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:35.125 04:03:49 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:35.125 04:03:49 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:35.125 04:03:49 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:35.125 04:03:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.125 04:03:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.125 04:03:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.125 04:03:49 -- accel/accel.sh@42 -- # jq -r . 00:06:35.385 [2024-05-14 04:03:49.727313] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:35.385 [2024-05-14 04:03:49.727435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3819074 ] 00:06:35.385 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.385 [2024-05-14 04:03:49.842858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.385 [2024-05-14 04:03:49.932937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.385 [2024-05-14 04:03:49.937468] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:35.385 [2024-05-14 04:03:49.945482] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:45.379 04:03:59 -- accel/accel.sh@18 -- # out=' 00:06:45.379 SPDK Configuration: 00:06:45.379 Core mask: 0x1 00:06:45.379 00:06:45.379 Accel Perf Configuration: 00:06:45.379 Workload Type: crc32c 00:06:45.379 CRC-32C seed: 0 00:06:45.379 Transfer size: 4096 bytes 00:06:45.379 Vector count 2 00:06:45.379 Module: dsa 00:06:45.379 Queue depth: 32 00:06:45.379 Allocate depth: 32 00:06:45.379 # threads/core: 1 00:06:45.379 Run time: 1 seconds 00:06:45.379 Verify: Yes 00:06:45.379 00:06:45.379 Running for 1 seconds... 
00:06:45.379 00:06:45.379 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.379 ------------------------------------------------------------------------------------ 00:06:45.379 0,0 252282/s 1970 MiB/s 0 0 00:06:45.379 ==================================================================================== 00:06:45.379 Total 252282/s 985 MiB/s 0 0' 00:06:45.379 04:03:59 -- accel/accel.sh@20 -- # IFS=: 00:06:45.379 04:03:59 -- accel/accel.sh@20 -- # read -r var val 00:06:45.379 04:03:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:45.379 04:03:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:45.379 04:03:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.379 04:03:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.379 04:03:59 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:45.379 04:03:59 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:45.379 04:03:59 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:45.379 04:03:59 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:45.379 04:03:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.379 04:03:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.379 04:03:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.379 04:03:59 -- accel/accel.sh@42 -- # jq -r . 00:06:45.379 [2024-05-14 04:03:59.395837] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:45.379 [2024-05-14 04:03:59.395961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3820925 ] 00:06:45.379 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.379 [2024-05-14 04:03:59.512902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.379 [2024-05-14 04:03:59.605668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.379 [2024-05-14 04:03:59.610208] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:45.379 [2024-05-14 04:03:59.618191] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val= 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val= 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val=0x1 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val= 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val= 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # 
val=crc32c 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val=0 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val= 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val=dsa 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@23 -- # accel_module=dsa 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val=32 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val=32 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val=1 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val=Yes 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val= 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:51.953 04:04:06 -- accel/accel.sh@21 -- # val= 00:06:51.953 04:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # IFS=: 00:06:51.953 04:04:06 -- accel/accel.sh@20 -- # read -r var val 00:06:54.492 04:04:09 -- accel/accel.sh@21 -- # val= 00:06:54.492 04:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.492 04:04:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.492 04:04:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.492 04:04:09 -- accel/accel.sh@21 -- # val= 00:06:54.492 04:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.492 04:04:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.492 04:04:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.492 04:04:09 -- accel/accel.sh@21 -- # val= 00:06:54.492 04:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.492 04:04:09 -- 
accel/accel.sh@20 -- # IFS=: 00:06:54.492 04:04:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.492 04:04:09 -- accel/accel.sh@21 -- # val= 00:06:54.492 04:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.492 04:04:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.492 04:04:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.492 04:04:09 -- accel/accel.sh@21 -- # val= 00:06:54.492 04:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.492 04:04:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.492 04:04:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.492 04:04:09 -- accel/accel.sh@21 -- # val= 00:06:54.492 04:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.492 04:04:09 -- accel/accel.sh@20 -- # IFS=: 00:06:54.493 04:04:09 -- accel/accel.sh@20 -- # read -r var val 00:06:54.493 04:04:09 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:06:54.493 04:04:09 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:54.493 04:04:09 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:06:54.493 00:06:54.493 real 0m19.352s 00:06:54.493 user 0m6.516s 00:06:54.493 sys 0m0.460s 00:06:54.493 04:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.493 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.493 ************************************ 00:06:54.493 END TEST accel_crc32c_C2 00:06:54.493 ************************************ 00:06:54.493 04:04:09 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:54.493 04:04:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:54.493 04:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.493 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.493 ************************************ 00:06:54.493 START TEST accel_copy 00:06:54.493 ************************************ 00:06:54.493 04:04:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:54.493 04:04:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.493 04:04:09 -- accel/accel.sh@17 -- # local accel_module 00:06:54.493 04:04:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:54.493 04:04:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:54.493 04:04:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.493 04:04:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.493 04:04:09 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:54.493 04:04:09 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:54.493 04:04:09 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:54.493 04:04:09 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:54.493 04:04:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.493 04:04:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.493 04:04:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.493 04:04:09 -- accel/accel.sh@42 -- # jq -r . 00:06:54.769 [2024-05-14 04:04:09.109097] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
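A short note on the accel_perf invocations traced above. The harness runs /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w <workload> -y plus workload-specific flags; judging from the "Accel Perf Configuration" blocks it prints, -t 1 is the 1-second run time, -w selects the workload type, -y enables verification, and -C sets the vector count. The config handed over on fd 62 is assembled by build_accel_config from the accel_json_cfg entries visible in the xtrace. The sketch below shows how those entries appear to be joined; only the two *_scan_accel_module method objects are taken from the log, while the surrounding "subsystems" wrapper and the printf call are assumptions for illustration.

    # Minimal sketch (assumptions noted above) of how build_accel_config seems to
    # join the accel_json_cfg entries with IFS=, before piping them through jq -r .
    accel_json_cfg=('{"method": "dsa_scan_accel_module"}'
                    '{"method": "iaa_scan_accel_module"}')
    (
        IFS=,
        printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}\n' \
            "${accel_json_cfg[*]}" | jq -r .
    )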
00:06:54.769 [2024-05-14 04:04:09.109266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3823033 ] 00:06:54.769 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.769 [2024-05-14 04:04:09.222345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.769 [2024-05-14 04:04:09.312549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.769 [2024-05-14 04:04:09.317060] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:54.769 [2024-05-14 04:04:09.325044] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:04.795 04:04:18 -- accel/accel.sh@18 -- # out=' 00:07:04.795 SPDK Configuration: 00:07:04.795 Core mask: 0x1 00:07:04.795 00:07:04.795 Accel Perf Configuration: 00:07:04.795 Workload Type: copy 00:07:04.795 Transfer size: 4096 bytes 00:07:04.795 Vector count 1 00:07:04.795 Module: dsa 00:07:04.795 Queue depth: 32 00:07:04.795 Allocate depth: 32 00:07:04.795 # threads/core: 1 00:07:04.795 Run time: 1 seconds 00:07:04.795 Verify: Yes 00:07:04.795 00:07:04.795 Running for 1 seconds... 00:07:04.795 00:07:04.795 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.795 ------------------------------------------------------------------------------------ 00:07:04.795 0,0 220832/s 862 MiB/s 0 0 00:07:04.795 ==================================================================================== 00:07:04.795 Total 220832/s 862 MiB/s 0 0' 00:07:04.795 04:04:18 -- accel/accel.sh@20 -- # IFS=: 00:07:04.795 04:04:18 -- accel/accel.sh@20 -- # read -r var val 00:07:04.795 04:04:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:04.795 04:04:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:04.795 04:04:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.795 04:04:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.795 04:04:18 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:04.795 04:04:18 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:04.795 04:04:18 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:04.795 04:04:18 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:04.795 04:04:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.795 04:04:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.795 04:04:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.795 04:04:18 -- accel/accel.sh@42 -- # jq -r . 00:07:04.795 [2024-05-14 04:04:18.750794] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
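The bandwidth columns in the result tables are consistent with transfers/s multiplied by the transfer size: the copy run above reports 220832 transfers/s at a 4096-byte transfer size, and 220832 * 4096 / 2^20 is roughly 862 MiB/s, matching the printed figure. For the crc32c run with -C 2 further up, the per-core line (1970 MiB/s) appears to count both 4096-byte input vectors while the Total line (985 MiB/s) counts only the transfer size; that reading is inferred from the numbers, not from the accel_perf source. A small helper to redo the conversion:

    # Convert an accel_perf transfer rate into MiB/s for a given transfer size.
    # Usage: mibps <transfers_per_sec> <transfer_size_bytes> [vector_count]
    mibps() {
        awk -v n="$1" -v sz="$2" -v vec="${3:-1}" \
            'BEGIN { printf "%.1f MiB/s\n", n * sz * vec / (1024 * 1024) }'
    }

    mibps 220832 4096      # ~862.6 MiB/s, the copy result above
    mibps 252282 4096 2    # ~1971 MiB/s, the per-core crc32c -C 2 line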
00:07:04.795 [2024-05-14 04:04:18.750918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824853 ] 00:07:04.795 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.795 [2024-05-14 04:04:18.865124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.795 [2024-05-14 04:04:18.955456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.795 [2024-05-14 04:04:18.959996] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:04.795 [2024-05-14 04:04:18.967982] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val= 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val= 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val=0x1 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val= 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val= 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val=copy 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val= 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val=dsa 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val=32 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val=32 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- 
accel/accel.sh@21 -- # val=1 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val=Yes 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val= 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:11.379 04:04:25 -- accel/accel.sh@21 -- # val= 00:07:11.379 04:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # IFS=: 00:07:11.379 04:04:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.922 04:04:28 -- accel/accel.sh@21 -- # val= 00:07:13.922 04:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # IFS=: 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # read -r var val 00:07:13.922 04:04:28 -- accel/accel.sh@21 -- # val= 00:07:13.922 04:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # IFS=: 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # read -r var val 00:07:13.922 04:04:28 -- accel/accel.sh@21 -- # val= 00:07:13.922 04:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # IFS=: 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # read -r var val 00:07:13.922 04:04:28 -- accel/accel.sh@21 -- # val= 00:07:13.922 04:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # IFS=: 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # read -r var val 00:07:13.922 04:04:28 -- accel/accel.sh@21 -- # val= 00:07:13.922 04:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # IFS=: 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # read -r var val 00:07:13.922 04:04:28 -- accel/accel.sh@21 -- # val= 00:07:13.922 04:04:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # IFS=: 00:07:13.922 04:04:28 -- accel/accel.sh@20 -- # read -r var val 00:07:13.922 04:04:28 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:13.922 04:04:28 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:13.922 04:04:28 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:13.922 00:07:13.922 real 0m19.325s 00:07:13.922 user 0m6.515s 00:07:13.922 sys 0m0.444s 00:07:13.922 04:04:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.922 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:07:13.922 ************************************ 00:07:13.922 END TEST accel_copy 00:07:13.922 ************************************ 00:07:13.922 04:04:28 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:13.922 04:04:28 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:13.922 04:04:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.923 04:04:28 -- common/autotest_common.sh@10 -- # set +x 00:07:13.923 ************************************ 00:07:13.923 START TEST accel_fill 
00:07:13.923 ************************************ 00:07:13.923 04:04:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:13.923 04:04:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.923 04:04:28 -- accel/accel.sh@17 -- # local accel_module 00:07:13.923 04:04:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:13.923 04:04:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:13.923 04:04:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.923 04:04:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.923 04:04:28 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:13.923 04:04:28 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:13.923 04:04:28 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:13.923 04:04:28 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:13.923 04:04:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.923 04:04:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.923 04:04:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.923 04:04:28 -- accel/accel.sh@42 -- # jq -r . 00:07:13.923 [2024-05-14 04:04:28.468145] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:13.923 [2024-05-14 04:04:28.468268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3826743 ] 00:07:14.227 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.227 [2024-05-14 04:04:28.585182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.227 [2024-05-14 04:04:28.677369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.227 [2024-05-14 04:04:28.681912] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:14.227 [2024-05-14 04:04:28.689898] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:24.220 04:04:38 -- accel/accel.sh@18 -- # out=' 00:07:24.220 SPDK Configuration: 00:07:24.220 Core mask: 0x1 00:07:24.220 00:07:24.220 Accel Perf Configuration: 00:07:24.220 Workload Type: fill 00:07:24.220 Fill pattern: 0x80 00:07:24.220 Transfer size: 4096 bytes 00:07:24.220 Vector count 1 00:07:24.220 Module: dsa 00:07:24.221 Queue depth: 64 00:07:24.221 Allocate depth: 64 00:07:24.221 # threads/core: 1 00:07:24.221 Run time: 1 seconds 00:07:24.221 Verify: Yes 00:07:24.221 00:07:24.221 Running for 1 seconds... 
00:07:24.221 00:07:24.221 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.221 ------------------------------------------------------------------------------------ 00:07:24.221 0,0 345690/s 1350 MiB/s 0 0 00:07:24.221 ==================================================================================== 00:07:24.221 Total 345690/s 1350 MiB/s 0 0' 00:07:24.221 04:04:38 -- accel/accel.sh@20 -- # IFS=: 00:07:24.221 04:04:38 -- accel/accel.sh@20 -- # read -r var val 00:07:24.221 04:04:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.221 04:04:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.221 04:04:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.221 04:04:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.221 04:04:38 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:24.221 04:04:38 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:24.221 04:04:38 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:24.221 04:04:38 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:24.221 04:04:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.221 04:04:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.221 04:04:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.221 04:04:38 -- accel/accel.sh@42 -- # jq -r . 00:07:24.221 [2024-05-14 04:04:38.161772] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:24.221 [2024-05-14 04:04:38.161894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828792 ] 00:07:24.221 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.221 [2024-05-14 04:04:38.277132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.221 [2024-05-14 04:04:38.369749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.221 [2024-05-14 04:04:38.374279] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:24.221 [2024-05-14 04:04:38.382259] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:30.796 04:04:44 -- accel/accel.sh@21 -- # val= 00:07:30.796 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.796 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.796 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.796 04:04:44 -- accel/accel.sh@21 -- # val= 00:07:30.796 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.796 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.796 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.796 04:04:44 -- accel/accel.sh@21 -- # val=0x1 00:07:30.796 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val= 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val= 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- 
accel/accel.sh@21 -- # val=fill 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val=0x80 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val= 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val=dsa 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val=64 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val=64 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val=1 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val=Yes 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val= 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:30.797 04:04:44 -- accel/accel.sh@21 -- # val= 00:07:30.797 04:04:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # IFS=: 00:07:30.797 04:04:44 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 04:04:47 -- accel/accel.sh@21 -- # val= 00:07:33.331 04:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 04:04:47 -- accel/accel.sh@21 -- # val= 00:07:33.331 04:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 04:04:47 -- accel/accel.sh@21 -- # val= 00:07:33.331 04:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 
04:04:47 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 04:04:47 -- accel/accel.sh@21 -- # val= 00:07:33.331 04:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 04:04:47 -- accel/accel.sh@21 -- # val= 00:07:33.331 04:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 04:04:47 -- accel/accel.sh@21 -- # val= 00:07:33.331 04:04:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # IFS=: 00:07:33.331 04:04:47 -- accel/accel.sh@20 -- # read -r var val 00:07:33.331 04:04:47 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:33.331 04:04:47 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:33.331 04:04:47 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:33.331 00:07:33.331 real 0m19.366s 00:07:33.331 user 0m6.545s 00:07:33.331 sys 0m0.470s 00:07:33.331 04:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.331 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.331 ************************************ 00:07:33.331 END TEST accel_fill 00:07:33.331 ************************************ 00:07:33.331 04:04:47 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:33.331 04:04:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:33.331 04:04:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.331 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.331 ************************************ 00:07:33.331 START TEST accel_copy_crc32c 00:07:33.331 ************************************ 00:07:33.331 04:04:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:33.331 04:04:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.331 04:04:47 -- accel/accel.sh@17 -- # local accel_module 00:07:33.331 04:04:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:33.331 04:04:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:33.331 04:04:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.331 04:04:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.331 04:04:47 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:33.331 04:04:47 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:33.331 04:04:47 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:33.331 04:04:47 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:33.331 04:04:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.331 04:04:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.331 04:04:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.331 04:04:47 -- accel/accel.sh@42 -- # jq -r . 00:07:33.331 [2024-05-14 04:04:47.867771] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:33.331 [2024-05-14 04:04:47.867887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3830635 ] 00:07:33.590 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.590 [2024-05-14 04:04:47.979407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.590 [2024-05-14 04:04:48.070541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.590 [2024-05-14 04:04:48.075032] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:33.590 [2024-05-14 04:04:48.083021] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:43.650 04:04:57 -- accel/accel.sh@18 -- # out=' 00:07:43.650 SPDK Configuration: 00:07:43.650 Core mask: 0x1 00:07:43.650 00:07:43.650 Accel Perf Configuration: 00:07:43.650 Workload Type: copy_crc32c 00:07:43.650 CRC-32C seed: 0 00:07:43.650 Vector size: 4096 bytes 00:07:43.650 Transfer size: 4096 bytes 00:07:43.650 Vector count 1 00:07:43.650 Module: dsa 00:07:43.650 Queue depth: 32 00:07:43.650 Allocate depth: 32 00:07:43.650 # threads/core: 1 00:07:43.650 Run time: 1 seconds 00:07:43.650 Verify: Yes 00:07:43.650 00:07:43.650 Running for 1 seconds... 00:07:43.650 00:07:43.650 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.650 ------------------------------------------------------------------------------------ 00:07:43.650 0,0 210912/s 823 MiB/s 0 0 00:07:43.650 ==================================================================================== 00:07:43.650 Total 210912/s 823 MiB/s 0 0' 00:07:43.650 04:04:57 -- accel/accel.sh@20 -- # IFS=: 00:07:43.650 04:04:57 -- accel/accel.sh@20 -- # read -r var val 00:07:43.650 04:04:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:43.650 04:04:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:43.650 04:04:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.650 04:04:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.650 04:04:57 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:43.650 04:04:57 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:43.650 04:04:57 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:43.650 04:04:57 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:43.650 04:04:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.650 04:04:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.650 04:04:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.650 04:04:57 -- accel/accel.sh@42 -- # jq -r . 00:07:43.650 [2024-05-14 04:04:57.546994] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
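The recurring "EAL: No free 2048 kB hugepages reported on node 1" lines look informational rather than fatal here: the runs use core mask 0x1, a single core whose local NUMA node evidently does have hugepages, and node 1 having none does not stop initialization, since every test still completes. That reading is an inference from the log. If the message ever needed chasing, the per-node hugepage pools can be checked directly from sysfs:

    # Show total and free 2048 kB hugepages per NUMA node.
    for node in /sys/devices/system/node/node*; do
        printf '%s: total=%s free=%s\n' "${node##*/}" \
            "$(cat "$node"/hugepages/hugepages-2048kB/nr_hugepages)" \
            "$(cat "$node"/hugepages/hugepages-2048kB/free_hugepages)"
    done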
00:07:43.650 [2024-05-14 04:04:57.547122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3832509 ] 00:07:43.650 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.650 [2024-05-14 04:04:57.651652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.650 [2024-05-14 04:04:57.745092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.650 [2024-05-14 04:04:57.749625] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:43.650 [2024-05-14 04:04:57.757607] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val= 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val= 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val=0x1 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val= 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val= 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val=0 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val= 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val=dsa 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 
00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val=32 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val=32 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val=1 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val=Yes 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val= 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:50.239 04:05:04 -- accel/accel.sh@21 -- # val= 00:07:50.239 04:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:50.239 04:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:52.786 04:05:07 -- accel/accel.sh@21 -- # val= 00:07:52.786 04:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:52.786 04:05:07 -- accel/accel.sh@21 -- # val= 00:07:52.786 04:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:52.786 04:05:07 -- accel/accel.sh@21 -- # val= 00:07:52.786 04:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:52.786 04:05:07 -- accel/accel.sh@21 -- # val= 00:07:52.786 04:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:52.786 04:05:07 -- accel/accel.sh@21 -- # val= 00:07:52.786 04:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:52.786 04:05:07 -- accel/accel.sh@21 -- # val= 00:07:52.786 04:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:52.786 04:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:52.786 04:05:07 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:52.786 04:05:07 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:52.786 04:05:07 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:52.786 00:07:52.786 real 0m19.360s 00:07:52.786 user 0m6.550s 00:07:52.786 sys 0m0.454s 00:07:52.786 04:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.786 04:05:07 -- common/autotest_common.sh@10 -- # set +x 00:07:52.786 ************************************ 
00:07:52.786 END TEST accel_copy_crc32c 00:07:52.786 ************************************ 00:07:52.786 04:05:07 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:52.786 04:05:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:52.786 04:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.786 04:05:07 -- common/autotest_common.sh@10 -- # set +x 00:07:52.786 ************************************ 00:07:52.786 START TEST accel_copy_crc32c_C2 00:07:52.786 ************************************ 00:07:52.786 04:05:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:52.786 04:05:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.786 04:05:07 -- accel/accel.sh@17 -- # local accel_module 00:07:52.786 04:05:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:52.786 04:05:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:52.786 04:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.786 04:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.786 04:05:07 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:52.786 04:05:07 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:52.786 04:05:07 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:52.786 04:05:07 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:52.786 04:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.786 04:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.786 04:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.786 04:05:07 -- accel/accel.sh@42 -- # jq -r . 00:07:52.786 [2024-05-14 04:05:07.260385] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:52.786 [2024-05-14 04:05:07.260506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3834572 ] 00:07:52.786 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.786 [2024-05-14 04:05:07.364237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.047 [2024-05-14 04:05:07.453709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.047 [2024-05-14 04:05:07.458253] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:53.047 [2024-05-14 04:05:07.466233] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:03.043 04:05:16 -- accel/accel.sh@18 -- # out=' 00:08:03.043 SPDK Configuration: 00:08:03.043 Core mask: 0x1 00:08:03.043 00:08:03.043 Accel Perf Configuration: 00:08:03.043 Workload Type: copy_crc32c 00:08:03.043 CRC-32C seed: 0 00:08:03.043 Vector size: 4096 bytes 00:08:03.043 Transfer size: 8192 bytes 00:08:03.043 Vector count 2 00:08:03.043 Module: dsa 00:08:03.043 Queue depth: 32 00:08:03.043 Allocate depth: 32 00:08:03.043 # threads/core: 1 00:08:03.043 Run time: 1 seconds 00:08:03.043 Verify: Yes 00:08:03.043 00:08:03.043 Running for 1 seconds... 
00:08:03.044 00:08:03.044 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:03.044 ------------------------------------------------------------------------------------ 00:08:03.044 0,0 134970/s 1054 MiB/s 0 0 00:08:03.044 ==================================================================================== 00:08:03.044 Total 134970/s 527 MiB/s 0 0' 00:08:03.044 04:05:16 -- accel/accel.sh@20 -- # IFS=: 00:08:03.044 04:05:16 -- accel/accel.sh@20 -- # read -r var val 00:08:03.044 04:05:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:03.044 04:05:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:03.044 04:05:16 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.044 04:05:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:03.044 04:05:16 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:03.044 04:05:16 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:03.044 04:05:16 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:03.044 04:05:16 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:03.044 04:05:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:03.044 04:05:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:03.044 04:05:16 -- accel/accel.sh@41 -- # local IFS=, 00:08:03.044 04:05:16 -- accel/accel.sh@42 -- # jq -r . 00:08:03.044 [2024-05-14 04:05:16.912830] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:03.044 [2024-05-14 04:05:16.912949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836409 ] 00:08:03.044 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.044 [2024-05-14 04:05:17.023933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.044 [2024-05-14 04:05:17.113914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.044 [2024-05-14 04:05:17.118402] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:03.044 [2024-05-14 04:05:17.126387] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val= 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val= 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val=0x1 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val= 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val= 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 
-- # val=copy_crc32c 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val=0 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val= 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val=dsa 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val=32 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val=32 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val=1 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val=Yes 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val= 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:09.619 04:05:23 -- accel/accel.sh@21 -- # val= 00:08:09.619 04:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:09.619 04:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:12.235 04:05:26 -- accel/accel.sh@21 -- # val= 00:08:12.235 04:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:12.235 04:05:26 -- accel/accel.sh@21 -- # val= 00:08:12.235 04:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.235 
04:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:12.235 04:05:26 -- accel/accel.sh@21 -- # val= 00:08:12.235 04:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:12.235 04:05:26 -- accel/accel.sh@21 -- # val= 00:08:12.235 04:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:12.235 04:05:26 -- accel/accel.sh@21 -- # val= 00:08:12.235 04:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:12.235 04:05:26 -- accel/accel.sh@21 -- # val= 00:08:12.235 04:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:12.235 04:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:12.235 04:05:26 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:12.235 04:05:26 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:12.235 04:05:26 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:12.235 00:08:12.235 real 0m19.310s 00:08:12.235 user 0m6.502s 00:08:12.235 sys 0m0.457s 00:08:12.235 04:05:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.235 04:05:26 -- common/autotest_common.sh@10 -- # set +x 00:08:12.235 ************************************ 00:08:12.235 END TEST accel_copy_crc32c_C2 00:08:12.235 ************************************ 00:08:12.235 04:05:26 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:12.235 04:05:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:12.235 04:05:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.235 04:05:26 -- common/autotest_common.sh@10 -- # set +x 00:08:12.235 ************************************ 00:08:12.235 START TEST accel_dualcast 00:08:12.235 ************************************ 00:08:12.235 04:05:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:12.235 04:05:26 -- accel/accel.sh@16 -- # local accel_opc 00:08:12.235 04:05:26 -- accel/accel.sh@17 -- # local accel_module 00:08:12.235 04:05:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:12.235 04:05:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:12.235 04:05:26 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.235 04:05:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.235 04:05:26 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:12.235 04:05:26 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:12.235 04:05:26 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:12.235 04:05:26 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:12.235 04:05:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.235 04:05:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.235 04:05:26 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.235 04:05:26 -- accel/accel.sh@42 -- # jq -r . 00:08:12.235 [2024-05-14 04:05:26.602681] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
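Every TEST block in this section has the same shape: run_test starts accel_test, which launches accel_perf once with the DSA/IAA JSON config, prints a results table, then drives a second timed run through accel.sh's key/value loop (the long val=... / case "$var" in xtrace), and finally checks that a module and an opcode were recorded and that the module is the expected dsa. Workload-specific flags map straight onto the printed configuration, for example the fill run's -f 128 -q 64 -a 64 shows up as Fill pattern 0x80 and queue/allocate depth 64. The sketch below is a rough reconstruction of that loop and of the closing checks based only on the xtrace; the key names and the sample input are illustrative, not the actual accel.sh source.

    # Rough reconstruction (see note above) of the settings loop and final checks
    # suggested by the xtrace: read colon-separated key:value pairs, record the
    # opcode and module, then verify them.
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # e.g. crc32c, copy, fill, dualcast
            module) accel_module=$val ;;  # e.g. dsa
            *)      : ;;                  # other keys (queue depth, run time, ...)
        esac
    done < <(printf '%s\n' 'opc:dualcast' 'module:dsa')   # illustrative input

    [[ -n $accel_module ]]          # a module was selected
    [[ -n $accel_opc ]]             # an opcode was exercised
    [[ $accel_module == "dsa" ]]    # and it is the DSA module, as expected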
00:08:12.235 [2024-05-14 04:05:26.602797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838513 ] 00:08:12.235 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.235 [2024-05-14 04:05:26.713746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.235 [2024-05-14 04:05:26.804075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.235 [2024-05-14 04:05:26.808604] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:12.235 [2024-05-14 04:05:26.816585] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:22.220 04:05:36 -- accel/accel.sh@18 -- # out=' 00:08:22.221 SPDK Configuration: 00:08:22.221 Core mask: 0x1 00:08:22.221 00:08:22.221 Accel Perf Configuration: 00:08:22.221 Workload Type: dualcast 00:08:22.221 Transfer size: 4096 bytes 00:08:22.221 Vector count 1 00:08:22.221 Module: dsa 00:08:22.221 Queue depth: 32 00:08:22.221 Allocate depth: 32 00:08:22.221 # threads/core: 1 00:08:22.221 Run time: 1 seconds 00:08:22.221 Verify: Yes 00:08:22.221 00:08:22.221 Running for 1 seconds... 00:08:22.221 00:08:22.221 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:22.221 ------------------------------------------------------------------------------------ 00:08:22.221 0,0 214656/s 838 MiB/s 0 0 00:08:22.221 ==================================================================================== 00:08:22.221 Total 214656/s 838 MiB/s 0 0' 00:08:22.221 04:05:36 -- accel/accel.sh@20 -- # IFS=: 00:08:22.221 04:05:36 -- accel/accel.sh@20 -- # read -r var val 00:08:22.221 04:05:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:22.221 04:05:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:22.221 04:05:36 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.221 04:05:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:22.221 04:05:36 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:22.221 04:05:36 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:22.221 04:05:36 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:22.221 04:05:36 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:22.221 04:05:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:22.221 04:05:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:22.221 04:05:36 -- accel/accel.sh@41 -- # local IFS=, 00:08:22.221 04:05:36 -- accel/accel.sh@42 -- # jq -r . 00:08:22.221 [2024-05-14 04:05:36.252638] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
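The START TEST / END TEST banners, the real/user/sys summary printed for each test (about 19 seconds of wall time per test here, even though the two measured accel_perf runs last 1 second each, so most of that time is setup and teardown), and the xtrace_disable calls all come from the run_test wrapper in autotest_common.sh. What follows is a rough sketch of what that wrapper appears to do, with the xtrace handling left out; it is not the actual implementation.

    # Hypothetical run_test-style wrapper: banner, time the test body, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # e.g. accel_test -t 1 -w dualcast -y
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }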
00:08:22.221 [2024-05-14 04:05:36.252752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840334 ] 00:08:22.221 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.221 [2024-05-14 04:05:36.365611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.221 [2024-05-14 04:05:36.457405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.221 [2024-05-14 04:05:36.461934] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:22.221 [2024-05-14 04:05:36.469917] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val= 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val= 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val=0x1 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val= 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val= 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val=dualcast 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val= 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val=dsa 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val=32 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val=32 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- 
accel/accel.sh@21 -- # val=1 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val=Yes 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val= 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:28.799 04:05:42 -- accel/accel.sh@21 -- # val= 00:08:28.799 04:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # IFS=: 00:08:28.799 04:05:42 -- accel/accel.sh@20 -- # read -r var val 00:08:31.349 04:05:45 -- accel/accel.sh@21 -- # val= 00:08:31.349 04:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # IFS=: 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # read -r var val 00:08:31.349 04:05:45 -- accel/accel.sh@21 -- # val= 00:08:31.349 04:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # IFS=: 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # read -r var val 00:08:31.349 04:05:45 -- accel/accel.sh@21 -- # val= 00:08:31.349 04:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # IFS=: 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # read -r var val 00:08:31.349 04:05:45 -- accel/accel.sh@21 -- # val= 00:08:31.349 04:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # IFS=: 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # read -r var val 00:08:31.349 04:05:45 -- accel/accel.sh@21 -- # val= 00:08:31.349 04:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # IFS=: 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # read -r var val 00:08:31.349 04:05:45 -- accel/accel.sh@21 -- # val= 00:08:31.349 04:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # IFS=: 00:08:31.349 04:05:45 -- accel/accel.sh@20 -- # read -r var val 00:08:31.349 04:05:45 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:31.349 04:05:45 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:31.349 04:05:45 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:31.349 00:08:31.349 real 0m19.333s 00:08:31.349 user 0m6.512s 00:08:31.349 sys 0m0.438s 00:08:31.349 04:05:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.349 04:05:45 -- common/autotest_common.sh@10 -- # set +x 00:08:31.349 ************************************ 00:08:31.349 END TEST accel_dualcast 00:08:31.349 ************************************ 00:08:31.349 04:05:45 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:31.349 04:05:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:31.349 04:05:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.349 04:05:45 -- common/autotest_common.sh@10 -- # set +x 00:08:31.610 ************************************ 00:08:31.610 START TEST accel_compare 00:08:31.610 
************************************ 00:08:31.610 04:05:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:31.610 04:05:45 -- accel/accel.sh@16 -- # local accel_opc 00:08:31.610 04:05:45 -- accel/accel.sh@17 -- # local accel_module 00:08:31.610 04:05:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:31.610 04:05:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:31.610 04:05:45 -- accel/accel.sh@12 -- # build_accel_config 00:08:31.610 04:05:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:31.610 04:05:45 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:31.610 04:05:45 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:31.610 04:05:45 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:31.610 04:05:45 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:31.610 04:05:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:31.610 04:05:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:31.610 04:05:45 -- accel/accel.sh@41 -- # local IFS=, 00:08:31.610 04:05:45 -- accel/accel.sh@42 -- # jq -r . 00:08:31.610 [2024-05-14 04:05:45.973197] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:31.610 [2024-05-14 04:05:45.973321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842337 ] 00:08:31.610 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.610 [2024-05-14 04:05:46.090054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.610 [2024-05-14 04:05:46.180606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.610 [2024-05-14 04:05:46.185150] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:31.610 [2024-05-14 04:05:46.193124] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:41.611 04:05:55 -- accel/accel.sh@18 -- # out=' 00:08:41.611 SPDK Configuration: 00:08:41.611 Core mask: 0x1 00:08:41.611 00:08:41.611 Accel Perf Configuration: 00:08:41.611 Workload Type: compare 00:08:41.611 Transfer size: 4096 bytes 00:08:41.611 Vector count 1 00:08:41.611 Module: dsa 00:08:41.611 Queue depth: 32 00:08:41.611 Allocate depth: 32 00:08:41.611 # threads/core: 1 00:08:41.611 Run time: 1 seconds 00:08:41.611 Verify: Yes 00:08:41.611 00:08:41.611 Running for 1 seconds... 
00:08:41.611 00:08:41.611 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:41.611 ------------------------------------------------------------------------------------ 00:08:41.611 0,0 243296/s 950 MiB/s 0 0 00:08:41.611 ==================================================================================== 00:08:41.611 Total 243296/s 950 MiB/s 0 0' 00:08:41.611 04:05:55 -- accel/accel.sh@20 -- # IFS=: 00:08:41.611 04:05:55 -- accel/accel.sh@20 -- # read -r var val 00:08:41.611 04:05:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:41.611 04:05:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:41.611 04:05:55 -- accel/accel.sh@12 -- # build_accel_config 00:08:41.611 04:05:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:41.611 04:05:55 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:41.611 04:05:55 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:41.611 04:05:55 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:41.611 04:05:55 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:41.611 04:05:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:41.611 04:05:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:41.611 04:05:55 -- accel/accel.sh@41 -- # local IFS=, 00:08:41.611 04:05:55 -- accel/accel.sh@42 -- # jq -r . 00:08:41.611 [2024-05-14 04:05:55.636735] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:41.611 [2024-05-14 04:05:55.636862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844260 ] 00:08:41.611 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.611 [2024-05-14 04:05:55.752475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.611 [2024-05-14 04:05:55.841139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.611 [2024-05-14 04:05:55.845678] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:41.611 [2024-05-14 04:05:55.853685] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val= 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val= 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val=0x1 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val= 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val= 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val=compare 
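The build_accel_config trace just above shows how each accel_perf run gets its module configuration: every enabled backend contributes a {"method": ...} entry to accel_json_cfg, and the assembled JSON is handed to accel_perf on an anonymous file descriptor (-c /dev/fd/62) rather than a file on disk. A rough bash sketch of that pattern, assuming a plain subsystems/accel wrapper around the entries; only the two method objects and the -c/-t/-w/-y flags are taken from the trace itself:

# Entries copied from the trace; the surrounding "subsystems" wrapper is an assumption.
accel_json_cfg=('{"method": "dsa_scan_accel_module"}' '{"method": "iaa_scan_accel_module"}')
cfg="{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[$(IFS=,; echo "${accel_json_cfg[*]}")]}]}"
# Process substitution yields a /dev/fd/NN path, matching the -c /dev/fd/62 seen above.
./build/examples/accel_perf -c <(echo "$cfg") -t 1 -w compare -y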
00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val= 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val=dsa 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val=32 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val=32 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val=1 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val=Yes 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val= 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:48.187 04:06:02 -- accel/accel.sh@21 -- # val= 00:08:48.187 04:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:48.187 04:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:50.754 04:06:05 -- accel/accel.sh@21 -- # val= 00:08:50.754 04:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # IFS=: 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # read -r var val 00:08:50.754 04:06:05 -- accel/accel.sh@21 -- # val= 00:08:50.754 04:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # IFS=: 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # read -r var val 00:08:50.754 04:06:05 -- accel/accel.sh@21 -- # val= 00:08:50.754 04:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # IFS=: 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # read -r var val 00:08:50.754 04:06:05 -- accel/accel.sh@21 -- # val= 00:08:50.754 04:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # 
IFS=: 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # read -r var val 00:08:50.754 04:06:05 -- accel/accel.sh@21 -- # val= 00:08:50.754 04:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # IFS=: 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # read -r var val 00:08:50.754 04:06:05 -- accel/accel.sh@21 -- # val= 00:08:50.754 04:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # IFS=: 00:08:50.754 04:06:05 -- accel/accel.sh@20 -- # read -r var val 00:08:50.754 04:06:05 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:50.754 04:06:05 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:50.754 04:06:05 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:50.754 00:08:50.754 real 0m19.314s 00:08:50.754 user 0m6.503s 00:08:50.754 sys 0m0.443s 00:08:50.754 04:06:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.754 04:06:05 -- common/autotest_common.sh@10 -- # set +x 00:08:50.754 ************************************ 00:08:50.754 END TEST accel_compare 00:08:50.754 ************************************ 00:08:50.754 04:06:05 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:50.754 04:06:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:50.754 04:06:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.754 04:06:05 -- common/autotest_common.sh@10 -- # set +x 00:08:50.754 ************************************ 00:08:50.754 START TEST accel_xor 00:08:50.754 ************************************ 00:08:50.754 04:06:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:50.754 04:06:05 -- accel/accel.sh@16 -- # local accel_opc 00:08:50.754 04:06:05 -- accel/accel.sh@17 -- # local accel_module 00:08:50.754 04:06:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:50.754 04:06:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:50.754 04:06:05 -- accel/accel.sh@12 -- # build_accel_config 00:08:50.754 04:06:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:50.754 04:06:05 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:50.754 04:06:05 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:50.754 04:06:05 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:50.754 04:06:05 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:50.754 04:06:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:50.754 04:06:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:50.754 04:06:05 -- accel/accel.sh@41 -- # local IFS=, 00:08:50.754 04:06:05 -- accel/accel.sh@42 -- # jq -r . 00:08:50.754 [2024-05-14 04:06:05.334272] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:50.754 [2024-05-14 04:06:05.334419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846474 ] 00:08:51.014 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.014 [2024-05-14 04:06:05.465895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.014 [2024-05-14 04:06:05.564859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.014 [2024-05-14 04:06:05.569448] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:51.014 [2024-05-14 04:06:05.577431] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:01.004 04:06:14 -- accel/accel.sh@18 -- # out=' 00:09:01.004 SPDK Configuration: 00:09:01.004 Core mask: 0x1 00:09:01.004 00:09:01.004 Accel Perf Configuration: 00:09:01.004 Workload Type: xor 00:09:01.004 Source buffers: 2 00:09:01.004 Transfer size: 4096 bytes 00:09:01.004 Vector count 1 00:09:01.004 Module: software 00:09:01.004 Queue depth: 32 00:09:01.004 Allocate depth: 32 00:09:01.004 # threads/core: 1 00:09:01.004 Run time: 1 seconds 00:09:01.004 Verify: Yes 00:09:01.004 00:09:01.004 Running for 1 seconds... 00:09:01.004 00:09:01.004 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:01.004 ------------------------------------------------------------------------------------ 00:09:01.004 0,0 461664/s 1803 MiB/s 0 0 00:09:01.004 ==================================================================================== 00:09:01.004 Total 461664/s 1803 MiB/s 0 0' 00:09:01.004 04:06:14 -- accel/accel.sh@20 -- # IFS=: 00:09:01.004 04:06:14 -- accel/accel.sh@20 -- # read -r var val 00:09:01.004 04:06:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:01.004 04:06:14 -- accel/accel.sh@12 -- # build_accel_config 00:09:01.004 04:06:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:01.004 04:06:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:01.004 04:06:14 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:01.004 04:06:14 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:01.004 04:06:14 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:01.004 04:06:14 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:01.004 04:06:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:01.004 04:06:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:01.004 04:06:14 -- accel/accel.sh@41 -- # local IFS=, 00:09:01.004 04:06:14 -- accel/accel.sh@42 -- # jq -r . 00:09:01.004 [2024-05-14 04:06:15.031993] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
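The recurring "EAL: No free 2048 kB hugepages reported on node 1" notice is informational: DPDK found no free 2 MiB hugepages on NUMA node 1, so the run can only draw hugepage-backed memory from the nodes that do have free pages. The per-node pools the notice refers to can be read from standard sysfs counters (stock Linux paths, nothing SPDK-specific):

# Reserved vs. free 2 MiB hugepages per NUMA node
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages \
       /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages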
00:09:01.004 [2024-05-14 04:06:15.032110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848621 ] 00:09:01.004 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.004 [2024-05-14 04:06:15.147989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.004 [2024-05-14 04:06:15.239831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.004 [2024-05-14 04:06:15.244360] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:01.004 [2024-05-14 04:06:15.252344] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val= 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val= 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val=0x1 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val= 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val= 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val=xor 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val=2 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val= 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val=software 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@23 -- # accel_module=software 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val=32 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- 
accel/accel.sh@21 -- # val=32 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val=1 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val=Yes 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val= 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:07.576 04:06:21 -- accel/accel.sh@21 -- # val= 00:09:07.576 04:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:07.576 04:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:10.112 04:06:24 -- accel/accel.sh@21 -- # val= 00:09:10.112 04:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:10.112 04:06:24 -- accel/accel.sh@21 -- # val= 00:09:10.112 04:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:10.112 04:06:24 -- accel/accel.sh@21 -- # val= 00:09:10.112 04:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:10.112 04:06:24 -- accel/accel.sh@21 -- # val= 00:09:10.112 04:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:10.112 04:06:24 -- accel/accel.sh@21 -- # val= 00:09:10.112 04:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:10.112 04:06:24 -- accel/accel.sh@21 -- # val= 00:09:10.112 04:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:10.112 04:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:10.112 04:06:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:10.112 04:06:24 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:10.112 04:06:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:10.112 00:09:10.112 real 0m19.375s 00:09:10.112 user 0m6.536s 00:09:10.112 sys 0m0.476s 00:09:10.112 04:06:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.112 04:06:24 -- common/autotest_common.sh@10 -- # set +x 00:09:10.112 ************************************ 00:09:10.112 END TEST accel_xor 00:09:10.112 ************************************ 00:09:10.112 04:06:24 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:10.112 04:06:24 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 
00:09:10.112 04:06:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.112 04:06:24 -- common/autotest_common.sh@10 -- # set +x 00:09:10.112 ************************************ 00:09:10.112 START TEST accel_xor 00:09:10.112 ************************************ 00:09:10.112 04:06:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:09:10.112 04:06:24 -- accel/accel.sh@16 -- # local accel_opc 00:09:10.112 04:06:24 -- accel/accel.sh@17 -- # local accel_module 00:09:10.112 04:06:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:09:10.112 04:06:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:10.112 04:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:09:10.112 04:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:10.112 04:06:24 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:10.112 04:06:24 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:10.112 04:06:24 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:10.112 04:06:24 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:10.112 04:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:10.113 04:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:10.113 04:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:09:10.113 04:06:24 -- accel/accel.sh@42 -- # jq -r . 00:09:10.373 [2024-05-14 04:06:24.727942] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:10.373 [2024-05-14 04:06:24.728051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850572 ] 00:09:10.373 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.373 [2024-05-14 04:06:24.824825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.373 [2024-05-14 04:06:24.914271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.373 [2024-05-14 04:06:24.918747] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:10.373 [2024-05-14 04:06:24.926735] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:20.368 04:06:34 -- accel/accel.sh@18 -- # out=' 00:09:20.368 SPDK Configuration: 00:09:20.368 Core mask: 0x1 00:09:20.368 00:09:20.368 Accel Perf Configuration: 00:09:20.368 Workload Type: xor 00:09:20.368 Source buffers: 3 00:09:20.368 Transfer size: 4096 bytes 00:09:20.368 Vector count 1 00:09:20.368 Module: software 00:09:20.368 Queue depth: 32 00:09:20.368 Allocate depth: 32 00:09:20.368 # threads/core: 1 00:09:20.368 Run time: 1 seconds 00:09:20.368 Verify: Yes 00:09:20.368 00:09:20.368 Running for 1 seconds... 
00:09:20.368 00:09:20.368 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:20.368 ------------------------------------------------------------------------------------ 00:09:20.368 0,0 441472/s 1724 MiB/s 0 0 00:09:20.368 ==================================================================================== 00:09:20.368 Total 441472/s 1724 MiB/s 0 0' 00:09:20.368 04:06:34 -- accel/accel.sh@20 -- # IFS=: 00:09:20.368 04:06:34 -- accel/accel.sh@20 -- # read -r var val 00:09:20.368 04:06:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:20.368 04:06:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:20.368 04:06:34 -- accel/accel.sh@12 -- # build_accel_config 00:09:20.368 04:06:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:20.368 04:06:34 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:20.368 04:06:34 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:20.368 04:06:34 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:20.368 04:06:34 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:20.368 04:06:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:20.368 04:06:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:20.368 04:06:34 -- accel/accel.sh@41 -- # local IFS=, 00:09:20.368 04:06:34 -- accel/accel.sh@42 -- # jq -r . 00:09:20.368 [2024-05-14 04:06:34.376373] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:20.368 [2024-05-14 04:06:34.376493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852392 ] 00:09:20.368 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.368 [2024-05-14 04:06:34.491227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.368 [2024-05-14 04:06:34.580223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.368 [2024-05-14 04:06:34.584763] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:20.368 [2024-05-14 04:06:34.592745] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:26.997 04:06:40 -- accel/accel.sh@21 -- # val= 00:09:26.997 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.997 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.997 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.997 04:06:40 -- accel/accel.sh@21 -- # val= 00:09:26.997 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.997 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.997 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.997 04:06:40 -- accel/accel.sh@21 -- # val=0x1 00:09:26.997 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.997 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.997 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.997 04:06:40 -- accel/accel.sh@21 -- # val= 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val= 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val=xor 
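Both xor passes report "Module: software": unlike dualcast and compare, the xor opcode is evidently not picked up by the DSA backend here, so the numbers above reflect a single CPU core rather than the accelerator. The second pass adds -x 3, i.e. three 4096-byte source buffers per operation ("Source buffers: 3"), and throughput drops only modestly against the two-buffer run (441472/s ≈ 1724 MiB/s vs 461664/s ≈ 1803 MiB/s). The flag and binary path below are the ones from the run_test line; launching it outside the harness (without the -c module config) is a simplification that should still exercise the same software path:

# Three-source xor variant as launched by the harness above
./build/examples/accel_perf -t 1 -w xor -y -x 3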
00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val=3 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val= 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val=software 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@23 -- # accel_module=software 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val=32 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val=32 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val=1 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val=Yes 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val= 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:40 -- accel/accel.sh@20 -- # read -r var val 00:09:26.998 04:06:40 -- accel/accel.sh@21 -- # val= 00:09:26.998 04:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.998 04:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:26.998 04:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:29.556 04:06:44 -- accel/accel.sh@21 -- # val= 00:09:29.556 04:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:29.556 04:06:44 -- accel/accel.sh@21 -- # val= 00:09:29.556 04:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:29.556 04:06:44 -- accel/accel.sh@21 -- # val= 00:09:29.556 04:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.556 04:06:44 -- accel/accel.sh@20 
-- # IFS=: 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:29.556 04:06:44 -- accel/accel.sh@21 -- # val= 00:09:29.556 04:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:29.556 04:06:44 -- accel/accel.sh@21 -- # val= 00:09:29.556 04:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:29.556 04:06:44 -- accel/accel.sh@21 -- # val= 00:09:29.556 04:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:29.556 04:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:29.556 04:06:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:29.556 04:06:44 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:29.556 04:06:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:29.556 00:09:29.556 real 0m19.369s 00:09:29.556 user 0m6.561s 00:09:29.556 sys 0m0.441s 00:09:29.556 04:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.556 04:06:44 -- common/autotest_common.sh@10 -- # set +x 00:09:29.556 ************************************ 00:09:29.556 END TEST accel_xor 00:09:29.556 ************************************ 00:09:29.556 04:06:44 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:29.556 04:06:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:29.556 04:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:29.556 04:06:44 -- common/autotest_common.sh@10 -- # set +x 00:09:29.556 ************************************ 00:09:29.556 START TEST accel_dif_verify 00:09:29.556 ************************************ 00:09:29.556 04:06:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:09:29.556 04:06:44 -- accel/accel.sh@16 -- # local accel_opc 00:09:29.556 04:06:44 -- accel/accel.sh@17 -- # local accel_module 00:09:29.556 04:06:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:09:29.556 04:06:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:29.556 04:06:44 -- accel/accel.sh@12 -- # build_accel_config 00:09:29.556 04:06:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:29.556 04:06:44 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:29.556 04:06:44 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:29.556 04:06:44 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:29.556 04:06:44 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:29.556 04:06:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:29.556 04:06:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:29.556 04:06:44 -- accel/accel.sh@41 -- # local IFS=, 00:09:29.556 04:06:44 -- accel/accel.sh@42 -- # jq -r . 00:09:29.556 [2024-05-14 04:06:44.130871] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:29.556 [2024-05-14 04:06:44.130989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854506 ] 00:09:29.816 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.816 [2024-05-14 04:06:44.242809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.816 [2024-05-14 04:06:44.333007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.816 [2024-05-14 04:06:44.337527] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:29.816 [2024-05-14 04:06:44.345512] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:39.803 04:06:53 -- accel/accel.sh@18 -- # out=' 00:09:39.803 SPDK Configuration: 00:09:39.803 Core mask: 0x1 00:09:39.803 00:09:39.803 Accel Perf Configuration: 00:09:39.803 Workload Type: dif_verify 00:09:39.803 Vector size: 4096 bytes 00:09:39.803 Transfer size: 4096 bytes 00:09:39.803 Block size: 512 bytes 00:09:39.803 Metadata size: 8 bytes 00:09:39.803 Vector count 1 00:09:39.803 Module: dsa 00:09:39.803 Queue depth: 32 00:09:39.803 Allocate depth: 32 00:09:39.803 # threads/core: 1 00:09:39.803 Run time: 1 seconds 00:09:39.803 Verify: No 00:09:39.803 00:09:39.803 Running for 1 seconds... 00:09:39.803 00:09:39.803 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:39.803 ------------------------------------------------------------------------------------ 00:09:39.803 0,0 355040/s 1408 MiB/s 0 0 00:09:39.803 ==================================================================================== 00:09:39.803 Total 355040/s 1386 MiB/s 0 0' 00:09:39.803 04:06:53 -- accel/accel.sh@20 -- # IFS=: 00:09:39.803 04:06:53 -- accel/accel.sh@20 -- # read -r var val 00:09:39.803 04:06:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:39.803 04:06:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:39.803 04:06:53 -- accel/accel.sh@12 -- # build_accel_config 00:09:39.803 04:06:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:39.803 04:06:53 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:39.803 04:06:53 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:39.803 04:06:53 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:39.803 04:06:53 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:39.803 04:06:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:39.803 04:06:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:39.803 04:06:53 -- accel/accel.sh@41 -- # local IFS=, 00:09:39.803 04:06:53 -- accel/accel.sh@42 -- # jq -r . 00:09:39.803 [2024-05-14 04:06:53.786771] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
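In the dif_verify table above, the per-core row shows 1408 MiB/s while the Total row shows 1386 MiB/s for the same 355040 transfers/s. The two figures are consistent with the per-core column counting payload plus DIF metadata (eight 512-byte blocks per 4096-byte transfer, 8 bytes of metadata each, i.e. 4160 bytes) while the Total column counts payload only (4096 bytes); the same 4160-vs-4096 split shows up again in the dif_generate and dif_generate_copy tables later in this run. That reading is an inference from the numbers, not something the tool states; the arithmetic itself checks out:

# "Block size: 512 bytes", "Metadata size: 8 bytes", "Transfer size: 4096 bytes"
xfers=355040
blocks=$((4096 / 512))        # 8 blocks per transfer
md=$((blocks * 8))            # 64 bytes of DIF metadata per transfer
awk -v x="$xfers" -v d=4096 -v m="$md" \
    'BEGIN { printf "per-core: %d MiB/s  total: %d MiB/s\n", int(x*(d+m)/1048576), int(x*d/1048576) }'
# -> per-core: 1408 MiB/s  total: 1386 MiB/s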
00:09:39.803 [2024-05-14 04:06:53.786895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856327 ] 00:09:39.803 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.803 [2024-05-14 04:06:53.901855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.803 [2024-05-14 04:06:53.991475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.803 [2024-05-14 04:06:53.995953] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:39.803 [2024-05-14 04:06:54.003941] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val= 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val= 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val=0x1 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val= 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val= 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val=dif_verify 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val='512 bytes' 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val='8 bytes' 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val= 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val=dsa 
00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@23 -- # accel_module=dsa 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val=32 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val=32 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val=1 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val=No 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val= 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.374 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:46.374 04:07:00 -- accel/accel.sh@21 -- # val= 00:09:46.374 04:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.375 04:07:00 -- accel/accel.sh@20 -- # IFS=: 00:09:46.375 04:07:00 -- accel/accel.sh@20 -- # read -r var val 00:09:48.913 04:07:03 -- accel/accel.sh@21 -- # val= 00:09:48.913 04:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # IFS=: 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # read -r var val 00:09:48.913 04:07:03 -- accel/accel.sh@21 -- # val= 00:09:48.913 04:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # IFS=: 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # read -r var val 00:09:48.913 04:07:03 -- accel/accel.sh@21 -- # val= 00:09:48.913 04:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # IFS=: 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # read -r var val 00:09:48.913 04:07:03 -- accel/accel.sh@21 -- # val= 00:09:48.913 04:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # IFS=: 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # read -r var val 00:09:48.913 04:07:03 -- accel/accel.sh@21 -- # val= 00:09:48.913 04:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # IFS=: 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # read -r var val 00:09:48.913 04:07:03 -- accel/accel.sh@21 -- # val= 00:09:48.913 04:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # IFS=: 00:09:48.913 04:07:03 -- accel/accel.sh@20 -- # read -r var val 00:09:48.913 04:07:03 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:09:48.913 04:07:03 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:09:48.913 04:07:03 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:09:48.913 00:09:48.913 real 0m19.325s 
00:09:48.913 user 0m6.534s 00:09:48.913 sys 0m0.431s 00:09:48.913 04:07:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.913 04:07:03 -- common/autotest_common.sh@10 -- # set +x 00:09:48.913 ************************************ 00:09:48.913 END TEST accel_dif_verify 00:09:48.913 ************************************ 00:09:48.913 04:07:03 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:48.914 04:07:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:48.914 04:07:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:48.914 04:07:03 -- common/autotest_common.sh@10 -- # set +x 00:09:48.914 ************************************ 00:09:48.914 START TEST accel_dif_generate 00:09:48.914 ************************************ 00:09:48.914 04:07:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:09:48.914 04:07:03 -- accel/accel.sh@16 -- # local accel_opc 00:09:48.914 04:07:03 -- accel/accel.sh@17 -- # local accel_module 00:09:48.914 04:07:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:09:48.914 04:07:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:48.914 04:07:03 -- accel/accel.sh@12 -- # build_accel_config 00:09:48.914 04:07:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:48.914 04:07:03 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:48.914 04:07:03 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:48.914 04:07:03 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:48.914 04:07:03 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:48.914 04:07:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:48.914 04:07:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:48.914 04:07:03 -- accel/accel.sh@41 -- # local IFS=, 00:09:48.914 04:07:03 -- accel/accel.sh@42 -- # jq -r . 00:09:48.914 [2024-05-14 04:07:03.489670] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:48.914 [2024-05-14 04:07:03.489796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858180 ] 00:09:49.187 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.187 [2024-05-14 04:07:03.607241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.187 [2024-05-14 04:07:03.698801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.187 [2024-05-14 04:07:03.703326] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:49.187 [2024-05-14 04:07:03.711309] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:59.191 04:07:13 -- accel/accel.sh@18 -- # out=' 00:09:59.191 SPDK Configuration: 00:09:59.191 Core mask: 0x1 00:09:59.191 00:09:59.191 Accel Perf Configuration: 00:09:59.191 Workload Type: dif_generate 00:09:59.191 Vector size: 4096 bytes 00:09:59.191 Transfer size: 4096 bytes 00:09:59.191 Block size: 512 bytes 00:09:59.191 Metadata size: 8 bytes 00:09:59.191 Vector count 1 00:09:59.191 Module: software 00:09:59.191 Queue depth: 32 00:09:59.191 Allocate depth: 32 00:09:59.191 # threads/core: 1 00:09:59.191 Run time: 1 seconds 00:09:59.191 Verify: No 00:09:59.191 00:09:59.191 Running for 1 seconds... 
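Every sub-test in this section has the same envelope: run_test prints a START banner, times the accel_test invocation (the real/user/sys triplet above), and closes with an END banner; judging by the timestamps, most of each ~19 s of wall time goes to SPDK/DPDK start-up, module probing and teardown around the two 1-second measurement windows rather than the measurement itself. Below is a deliberately simplified stand-in for that envelope, not SPDK's actual run_test; banner width, function body and the demo command are illustrative only:

# Minimal run_test-style wrapper (hypothetical; the real run_test in
# autotest_common.sh also does xtrace and exit-status bookkeeping)
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}
run_test demo_sleep sleep 1
# The harness's own form, as seen above:
#   run_test accel_dif_generate accel_test -t 1 -w dif_generate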
00:09:59.191 00:09:59.191 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:59.191 ------------------------------------------------------------------------------------ 00:09:59.191 0,0 158368/s 628 MiB/s 0 0 00:09:59.191 ==================================================================================== 00:09:59.191 Total 158368/s 618 MiB/s 0 0' 00:09:59.192 04:07:13 -- accel/accel.sh@20 -- # IFS=: 00:09:59.192 04:07:13 -- accel/accel.sh@20 -- # read -r var val 00:09:59.192 04:07:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:59.192 04:07:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:59.192 04:07:13 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.192 04:07:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.192 04:07:13 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:59.192 04:07:13 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:59.192 04:07:13 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:59.192 04:07:13 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:59.192 04:07:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.192 04:07:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.192 04:07:13 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.192 04:07:13 -- accel/accel.sh@42 -- # jq -r . 00:09:59.192 [2024-05-14 04:07:13.196118] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:59.192 [2024-05-14 04:07:13.196252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3860252 ] 00:09:59.192 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.192 [2024-05-14 04:07:13.308203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.192 [2024-05-14 04:07:13.397847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.192 [2024-05-14 04:07:13.402386] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:59.192 [2024-05-14 04:07:13.410368] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val= 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val= 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val=0x1 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val= 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val= 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # 
val=dif_generate 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val= 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val=software 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@23 -- # accel_module=software 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val=32 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val=32 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val=1 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val=No 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val= 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:05.795 04:07:19 -- accel/accel.sh@21 -- # val= 00:10:05.795 04:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # IFS=: 00:10:05.795 04:07:19 -- accel/accel.sh@20 -- # read -r var val 00:10:08.324 04:07:22 -- accel/accel.sh@21 -- # val= 00:10:08.324 04:07:22 -- accel/accel.sh@22 -- # 
case "$var" in 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.324 04:07:22 -- accel/accel.sh@21 -- # val= 00:10:08.324 04:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.324 04:07:22 -- accel/accel.sh@21 -- # val= 00:10:08.324 04:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.324 04:07:22 -- accel/accel.sh@21 -- # val= 00:10:08.324 04:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.324 04:07:22 -- accel/accel.sh@21 -- # val= 00:10:08.324 04:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.324 04:07:22 -- accel/accel.sh@21 -- # val= 00:10:08.324 04:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # IFS=: 00:10:08.324 04:07:22 -- accel/accel.sh@20 -- # read -r var val 00:10:08.324 04:07:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:08.325 04:07:22 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:08.325 04:07:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:08.325 00:10:08.325 real 0m19.373s 00:10:08.325 user 0m6.534s 00:10:08.325 sys 0m0.463s 00:10:08.325 04:07:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:08.325 04:07:22 -- common/autotest_common.sh@10 -- # set +x 00:10:08.325 ************************************ 00:10:08.325 END TEST accel_dif_generate 00:10:08.325 ************************************ 00:10:08.325 04:07:22 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:08.325 04:07:22 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:08.325 04:07:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:08.325 04:07:22 -- common/autotest_common.sh@10 -- # set +x 00:10:08.325 ************************************ 00:10:08.325 START TEST accel_dif_generate_copy 00:10:08.325 ************************************ 00:10:08.325 04:07:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:10:08.325 04:07:22 -- accel/accel.sh@16 -- # local accel_opc 00:10:08.325 04:07:22 -- accel/accel.sh@17 -- # local accel_module 00:10:08.325 04:07:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:08.325 04:07:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:08.325 04:07:22 -- accel/accel.sh@12 -- # build_accel_config 00:10:08.325 04:07:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:08.325 04:07:22 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:08.325 04:07:22 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:08.325 04:07:22 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:08.325 04:07:22 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:08.325 04:07:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:08.325 04:07:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:08.325 04:07:22 -- accel/accel.sh@41 -- # local IFS=, 00:10:08.325 04:07:22 -- accel/accel.sh@42 -- # 
jq -r . 00:10:08.325 [2024-05-14 04:07:22.889182] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:08.325 [2024-05-14 04:07:22.889298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862101 ] 00:10:08.584 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.584 [2024-05-14 04:07:22.999939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.584 [2024-05-14 04:07:23.089493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.584 [2024-05-14 04:07:23.093970] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:08.584 [2024-05-14 04:07:23.101957] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:18.572 04:07:32 -- accel/accel.sh@18 -- # out=' 00:10:18.572 SPDK Configuration: 00:10:18.572 Core mask: 0x1 00:10:18.572 00:10:18.572 Accel Perf Configuration: 00:10:18.572 Workload Type: dif_generate_copy 00:10:18.572 Vector size: 4096 bytes 00:10:18.572 Transfer size: 4096 bytes 00:10:18.572 Vector count 1 00:10:18.572 Module: dsa 00:10:18.572 Queue depth: 32 00:10:18.572 Allocate depth: 32 00:10:18.572 # threads/core: 1 00:10:18.572 Run time: 1 seconds 00:10:18.572 Verify: No 00:10:18.572 00:10:18.572 Running for 1 seconds... 00:10:18.572 00:10:18.572 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:18.572 ------------------------------------------------------------------------------------ 00:10:18.572 0,0 336128/s 1333 MiB/s 0 0 00:10:18.572 ==================================================================================== 00:10:18.572 Total 336128/s 1313 MiB/s 0 0' 00:10:18.572 04:07:32 -- accel/accel.sh@20 -- # IFS=: 00:10:18.572 04:07:32 -- accel/accel.sh@20 -- # read -r var val 00:10:18.572 04:07:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:18.572 04:07:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:18.572 04:07:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.572 04:07:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.572 04:07:32 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:18.573 04:07:32 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:18.573 04:07:32 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:18.573 04:07:32 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:18.573 04:07:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.573 04:07:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.573 04:07:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.573 04:07:32 -- accel/accel.sh@42 -- # jq -r . 00:10:18.573 [2024-05-14 04:07:32.545245] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
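Note on the configuration handoff visible in the run being brought up above: build_accel_config collects one JSON-RPC snippet per enabled engine ('{"method": "dsa_scan_accel_module"}' and '{"method": "iaa_scan_accel_module"}'), joins them with IFS=, and hands the result to accel_perf as a config file on /dev/fd/62. The bash sketch below reproduces that pattern; the outer "subsystems" wrapper and the herestring-on-fd-62 plumbing are assumptions for illustration, not lifted from accel.sh.

    #!/usr/bin/env bash
    # Sketch of the config handoff, assuming the standard SPDK JSON config shape.
    accel_json_cfg=('{"method": "dsa_scan_accel_module"}'
                    '{"method": "iaa_scan_accel_module"}')
    # Join the snippets with commas (IFS=, as in the trace) and wrap them in a
    # subsystems/accel config document -- the wrapper shape is an assumption.
    cfg=$(IFS=,; printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}' \
                        "${accel_json_cfg[*]}")
    # Feed the document to accel_perf on fd 62, matching the -c /dev/fd/62 seen above.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w dif_generate_copy 62<<< "$cfg"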
00:10:18.573 [2024-05-14 04:07:32.545368] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3863924 ] 00:10:18.573 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.573 [2024-05-14 04:07:32.663649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.573 [2024-05-14 04:07:32.755111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.573 [2024-05-14 04:07:32.759646] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:18.573 [2024-05-14 04:07:32.767631] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val= 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val= 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val=0x1 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val= 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val= 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val= 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val=dsa 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@23 -- # accel_module=dsa 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val=32 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var 
val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val=32 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val=1 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val=No 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val= 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:25.150 04:07:39 -- accel/accel.sh@21 -- # val= 00:10:25.150 04:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # IFS=: 00:10:25.150 04:07:39 -- accel/accel.sh@20 -- # read -r var val 00:10:27.684 04:07:42 -- accel/accel.sh@21 -- # val= 00:10:27.684 04:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # IFS=: 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # read -r var val 00:10:27.684 04:07:42 -- accel/accel.sh@21 -- # val= 00:10:27.684 04:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # IFS=: 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # read -r var val 00:10:27.684 04:07:42 -- accel/accel.sh@21 -- # val= 00:10:27.684 04:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # IFS=: 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # read -r var val 00:10:27.684 04:07:42 -- accel/accel.sh@21 -- # val= 00:10:27.684 04:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # IFS=: 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # read -r var val 00:10:27.684 04:07:42 -- accel/accel.sh@21 -- # val= 00:10:27.684 04:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # IFS=: 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # read -r var val 00:10:27.684 04:07:42 -- accel/accel.sh@21 -- # val= 00:10:27.684 04:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # IFS=: 00:10:27.684 04:07:42 -- accel/accel.sh@20 -- # read -r var val 00:10:27.684 04:07:42 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:10:27.684 04:07:42 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:27.684 04:07:42 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:10:27.684 00:10:27.684 real 0m19.331s 00:10:27.684 user 0m6.533s 00:10:27.684 sys 0m0.449s 00:10:27.684 04:07:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.684 04:07:42 -- common/autotest_common.sh@10 -- # set +x 00:10:27.684 ************************************ 00:10:27.684 END TEST accel_dif_generate_copy 00:10:27.684 ************************************ 00:10:27.684 04:07:42 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:27.684 04:07:42 -- accel/accel.sh@108 -- # run_test accel_comp 
accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:27.684 04:07:42 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:27.684 04:07:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:27.684 04:07:42 -- common/autotest_common.sh@10 -- # set +x 00:10:27.684 ************************************ 00:10:27.684 START TEST accel_comp 00:10:27.684 ************************************ 00:10:27.684 04:07:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:27.684 04:07:42 -- accel/accel.sh@16 -- # local accel_opc 00:10:27.684 04:07:42 -- accel/accel.sh@17 -- # local accel_module 00:10:27.684 04:07:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:27.684 04:07:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:27.684 04:07:42 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.684 04:07:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.684 04:07:42 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:27.684 04:07:42 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:27.684 04:07:42 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:27.684 04:07:42 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:27.684 04:07:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.684 04:07:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.684 04:07:42 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.684 04:07:42 -- accel/accel.sh@42 -- # jq -r . 00:10:27.684 [2024-05-14 04:07:42.249813] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:27.684 [2024-05-14 04:07:42.249932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866036 ] 00:10:27.943 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.943 [2024-05-14 04:07:42.360542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.943 [2024-05-14 04:07:42.450354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.943 [2024-05-14 04:07:42.454841] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:27.943 [2024-05-14 04:07:42.462827] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:37.994 04:07:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:37.994 00:10:37.994 SPDK Configuration: 00:10:37.994 Core mask: 0x1 00:10:37.994 00:10:37.994 Accel Perf Configuration: 00:10:37.994 Workload Type: compress 00:10:37.994 Transfer size: 4096 bytes 00:10:37.994 Vector count 1 00:10:37.994 Module: iaa 00:10:37.994 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:37.994 Queue depth: 32 00:10:37.994 Allocate depth: 32 00:10:37.994 # threads/core: 1 00:10:37.994 Run time: 1 seconds 00:10:37.994 Verify: No 00:10:37.994 00:10:37.994 Running for 1 seconds... 
00:10:37.994 00:10:37.994 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:37.994 ------------------------------------------------------------------------------------ 00:10:37.994 0,0 280592/s 1169 MiB/s 0 0 00:10:37.994 ==================================================================================== 00:10:37.994 Total 280592/s 1096 MiB/s 0 0' 00:10:37.994 04:07:51 -- accel/accel.sh@20 -- # IFS=: 00:10:37.994 04:07:51 -- accel/accel.sh@20 -- # read -r var val 00:10:37.994 04:07:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:37.994 04:07:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:37.994 04:07:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.994 04:07:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.994 04:07:51 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:37.994 04:07:51 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:37.994 04:07:51 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:37.994 04:07:51 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:37.994 04:07:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.994 04:07:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.994 04:07:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.994 04:07:51 -- accel/accel.sh@42 -- # jq -r . 00:10:37.994 [2024-05-14 04:07:51.914383] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:37.994 [2024-05-14 04:07:51.914504] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3867856 ] 00:10:37.994 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.994 [2024-05-14 04:07:52.029730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.994 [2024-05-14 04:07:52.119066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.994 [2024-05-14 04:07:52.123560] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:37.994 [2024-05-14 04:07:52.131546] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val= 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val= 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val= 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val=0x1 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val= 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 
00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val= 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val=compress 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val= 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val=iaa 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@23 -- # accel_module=iaa 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val=32 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val=32 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val=1 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val=No 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val= 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:44.573 04:07:58 -- accel/accel.sh@21 -- # val= 00:10:44.573 04:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # IFS=: 00:10:44.573 04:07:58 -- accel/accel.sh@20 -- # read -r var val 00:10:47.107 04:08:01 -- accel/accel.sh@21 -- # val= 00:10:47.107 04:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # IFS=: 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- 
# read -r var val 00:10:47.107 04:08:01 -- accel/accel.sh@21 -- # val= 00:10:47.107 04:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # IFS=: 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # read -r var val 00:10:47.107 04:08:01 -- accel/accel.sh@21 -- # val= 00:10:47.107 04:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # IFS=: 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # read -r var val 00:10:47.107 04:08:01 -- accel/accel.sh@21 -- # val= 00:10:47.107 04:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # IFS=: 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # read -r var val 00:10:47.107 04:08:01 -- accel/accel.sh@21 -- # val= 00:10:47.107 04:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # IFS=: 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # read -r var val 00:10:47.107 04:08:01 -- accel/accel.sh@21 -- # val= 00:10:47.107 04:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # IFS=: 00:10:47.107 04:08:01 -- accel/accel.sh@20 -- # read -r var val 00:10:47.107 04:08:01 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:10:47.107 04:08:01 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:47.107 04:08:01 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:10:47.107 00:10:47.107 real 0m19.339s 00:10:47.107 user 0m6.534s 00:10:47.107 sys 0m0.455s 00:10:47.107 04:08:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.107 04:08:01 -- common/autotest_common.sh@10 -- # set +x 00:10:47.107 ************************************ 00:10:47.107 END TEST accel_comp 00:10:47.107 ************************************ 00:10:47.107 04:08:01 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:47.107 04:08:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:47.107 04:08:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.107 04:08:01 -- common/autotest_common.sh@10 -- # set +x 00:10:47.107 ************************************ 00:10:47.107 START TEST accel_decomp 00:10:47.107 ************************************ 00:10:47.107 04:08:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:47.107 04:08:01 -- accel/accel.sh@16 -- # local accel_opc 00:10:47.107 04:08:01 -- accel/accel.sh@17 -- # local accel_module 00:10:47.107 04:08:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:47.107 04:08:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:47.107 04:08:01 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.107 04:08:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.107 04:08:01 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:47.107 04:08:01 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:47.107 04:08:01 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:47.107 04:08:01 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:47.107 04:08:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.107 04:08:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.107 04:08:01 -- accel/accel.sh@41 
-- # local IFS=, 00:10:47.107 04:08:01 -- accel/accel.sh@42 -- # jq -r . 00:10:47.107 [2024-05-14 04:08:01.618220] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:47.107 [2024-05-14 04:08:01.618336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3869970 ] 00:10:47.107 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.368 [2024-05-14 04:08:01.731263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.368 [2024-05-14 04:08:01.821821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.368 [2024-05-14 04:08:01.826418] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:47.368 [2024-05-14 04:08:01.834405] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:57.356 04:08:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:57.356 00:10:57.356 SPDK Configuration: 00:10:57.356 Core mask: 0x1 00:10:57.357 00:10:57.357 Accel Perf Configuration: 00:10:57.357 Workload Type: decompress 00:10:57.357 Transfer size: 4096 bytes 00:10:57.357 Vector count 1 00:10:57.357 Module: iaa 00:10:57.357 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:57.357 Queue depth: 32 00:10:57.357 Allocate depth: 32 00:10:57.357 # threads/core: 1 00:10:57.357 Run time: 1 seconds 00:10:57.357 Verify: Yes 00:10:57.357 00:10:57.357 Running for 1 seconds... 00:10:57.357 00:10:57.357 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:57.357 ------------------------------------------------------------------------------------ 00:10:57.357 0,0 289904/s 657 MiB/s 0 0 00:10:57.357 ==================================================================================== 00:10:57.357 Total 289904/s 1132 MiB/s 0 0' 00:10:57.357 04:08:11 -- accel/accel.sh@20 -- # IFS=: 00:10:57.357 04:08:11 -- accel/accel.sh@20 -- # read -r var val 00:10:57.357 04:08:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:57.357 04:08:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:57.357 04:08:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.357 04:08:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.357 04:08:11 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:57.357 04:08:11 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:57.357 04:08:11 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:57.357 04:08:11 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:57.357 04:08:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.357 04:08:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.357 04:08:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.357 04:08:11 -- accel/accel.sh@42 -- # jq -r . 00:10:57.357 [2024-05-14 04:08:11.276265] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
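Each run above closes with the same gate before its elapsed-time summary: the script checks that a module name and an opcode were actually recorded and that the module matches the one this test variant expects ([[ -n iaa ]], [[ -n decompress ]], [[ iaa == \i\a\a ]] in the trace). The sketch below reconstructs that gate; the variable names are reconstructed for readability, not quoted from accel.sh.

    # Reconstruction of the end-of-run check (assumed variable names).
    expected_module=iaa      # module this test variant expects to claim the op
    accel_module=iaa         # module accel_perf reported for the run
    accel_opc=decompress     # workload that was exercised
    if [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] &&
       [[ "$accel_module" == "$expected_module" ]]; then
        echo "PASS: $accel_opc handled by $accel_module"
    else
        echo "FAIL: expected $expected_module, got '${accel_module:-none}'" >&2
        exit 1
    fi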
00:10:57.357 [2024-05-14 04:08:11.276380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3871794 ] 00:10:57.357 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.357 [2024-05-14 04:08:11.387430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.357 [2024-05-14 04:08:11.476824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.357 [2024-05-14 04:08:11.481329] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:57.357 [2024-05-14 04:08:11.489315] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val= 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val= 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val= 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val=0x1 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val= 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val= 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val=decompress 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val= 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val=iaa 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- 
accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val=32 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val=32 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val=1 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val=Yes 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.939 04:08:17 -- accel/accel.sh@21 -- # val= 00:11:03.939 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.939 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:03.940 04:08:17 -- accel/accel.sh@21 -- # val= 00:11:03.940 04:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.940 04:08:17 -- accel/accel.sh@20 -- # IFS=: 00:11:03.940 04:08:17 -- accel/accel.sh@20 -- # read -r var val 00:11:06.481 04:08:20 -- accel/accel.sh@21 -- # val= 00:11:06.481 04:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:06.481 04:08:20 -- accel/accel.sh@21 -- # val= 00:11:06.481 04:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:06.481 04:08:20 -- accel/accel.sh@21 -- # val= 00:11:06.481 04:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:06.481 04:08:20 -- accel/accel.sh@21 -- # val= 00:11:06.481 04:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:06.481 04:08:20 -- accel/accel.sh@21 -- # val= 00:11:06.481 04:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:06.481 04:08:20 -- accel/accel.sh@21 -- # val= 00:11:06.481 04:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # IFS=: 00:11:06.481 04:08:20 -- accel/accel.sh@20 -- # read -r var val 00:11:06.481 04:08:20 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:06.481 04:08:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:06.481 04:08:20 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:06.481 00:11:06.481 real 0m19.337s 00:11:06.481 user 0m6.539s 00:11:06.481 sys 0m0.454s 00:11:06.481 04:08:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.481 04:08:20 -- common/autotest_common.sh@10 -- # set +x 00:11:06.481 
************************************ 00:11:06.481 END TEST accel_decomp 00:11:06.481 ************************************ 00:11:06.481 04:08:20 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:06.481 04:08:20 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:06.481 04:08:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.481 04:08:20 -- common/autotest_common.sh@10 -- # set +x 00:11:06.481 ************************************ 00:11:06.481 START TEST accel_decmop_full 00:11:06.481 ************************************ 00:11:06.481 04:08:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:06.481 04:08:20 -- accel/accel.sh@16 -- # local accel_opc 00:11:06.481 04:08:20 -- accel/accel.sh@17 -- # local accel_module 00:11:06.481 04:08:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:06.481 04:08:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:06.481 04:08:20 -- accel/accel.sh@12 -- # build_accel_config 00:11:06.481 04:08:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:06.481 04:08:20 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:06.481 04:08:20 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:06.481 04:08:20 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:06.481 04:08:20 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:06.481 04:08:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:06.481 04:08:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:06.481 04:08:20 -- accel/accel.sh@41 -- # local IFS=, 00:11:06.481 04:08:20 -- accel/accel.sh@42 -- # jq -r . 00:11:06.481 [2024-05-14 04:08:20.987346] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:06.481 [2024-05-14 04:08:20.987458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3873753 ] 00:11:06.481 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.741 [2024-05-14 04:08:21.099156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.741 [2024-05-14 04:08:21.193083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.741 [2024-05-14 04:08:21.197668] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:06.741 [2024-05-14 04:08:21.205658] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:16.748 04:08:30 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:16.748 00:11:16.748 SPDK Configuration: 00:11:16.748 Core mask: 0x1 00:11:16.748 00:11:16.748 Accel Perf Configuration: 00:11:16.748 Workload Type: decompress 00:11:16.748 Transfer size: 111250 bytes 00:11:16.748 Vector count 1 00:11:16.748 Module: iaa 00:11:16.748 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:16.748 Queue depth: 32 00:11:16.748 Allocate depth: 32 00:11:16.748 # threads/core: 1 00:11:16.748 Run time: 1 seconds 00:11:16.748 Verify: Yes 00:11:16.748 00:11:16.748 Running for 1 seconds... 00:11:16.748 00:11:16.748 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:16.748 ------------------------------------------------------------------------------------ 00:11:16.748 0,0 107312/s 6049 MiB/s 0 0 00:11:16.748 ==================================================================================== 00:11:16.748 Total 107312/s 11385 MiB/s 0 0' 00:11:16.748 04:08:30 -- accel/accel.sh@20 -- # IFS=: 00:11:16.748 04:08:30 -- accel/accel.sh@20 -- # read -r var val 00:11:16.748 04:08:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:16.748 04:08:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:11:16.748 04:08:30 -- accel/accel.sh@12 -- # build_accel_config 00:11:16.748 04:08:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:16.748 04:08:30 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:16.748 04:08:30 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:16.748 04:08:30 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:16.748 04:08:30 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:16.748 04:08:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:16.748 04:08:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:16.748 04:08:30 -- accel/accel.sh@41 -- # local IFS=, 00:11:16.748 04:08:30 -- accel/accel.sh@42 -- # jq -r . 00:11:16.748 [2024-05-14 04:08:30.654386] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
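Quick cross-check of the Total row just printed: with the 111250-byte transfer size of this "full" variant, the reported 107312 transfers/s works out to the 11385 MiB/s shown (the per-core MiB/s column is reported separately by accel_perf and need not match). The values below are copied from the log; treating Total MiB/s as transfers x transfer size / 2^20 is an inference from the numbers, not a documented formula.

    # Back-of-the-envelope check of the Total row above.
    transfers_per_sec=107312       # from the Total row
    transfer_size=111250           # bytes, from "Transfer size: 111250 bytes"
    echo $(( transfers_per_sec * transfer_size / 1048576 ))   # prints 11385 (MiB/s)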
00:11:16.748 [2024-05-14 04:08:30.654522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875728 ] 00:11:16.748 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.748 [2024-05-14 04:08:30.770204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.748 [2024-05-14 04:08:30.859293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.748 [2024-05-14 04:08:30.863812] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:16.748 [2024-05-14 04:08:30.871793] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val= 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val= 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val= 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val=0x1 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val= 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val= 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val=decompress 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val= 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val=iaa 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- 
accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val=32 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val=32 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val=1 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val=Yes 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val= 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:23.326 04:08:37 -- accel/accel.sh@21 -- # val= 00:11:23.326 04:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # IFS=: 00:11:23.326 04:08:37 -- accel/accel.sh@20 -- # read -r var val 00:11:25.869 04:08:40 -- accel/accel.sh@21 -- # val= 00:11:25.869 04:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # IFS=: 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # read -r var val 00:11:25.869 04:08:40 -- accel/accel.sh@21 -- # val= 00:11:25.869 04:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # IFS=: 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # read -r var val 00:11:25.869 04:08:40 -- accel/accel.sh@21 -- # val= 00:11:25.869 04:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # IFS=: 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # read -r var val 00:11:25.869 04:08:40 -- accel/accel.sh@21 -- # val= 00:11:25.869 04:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # IFS=: 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # read -r var val 00:11:25.869 04:08:40 -- accel/accel.sh@21 -- # val= 00:11:25.869 04:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # IFS=: 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # read -r var val 00:11:25.869 04:08:40 -- accel/accel.sh@21 -- # val= 00:11:25.869 04:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # IFS=: 00:11:25.869 04:08:40 -- accel/accel.sh@20 -- # read -r var val 00:11:25.869 04:08:40 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:25.869 04:08:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:25.869 04:08:40 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:25.869 00:11:25.869 real 0m19.364s 00:11:25.869 user 0m6.570s 00:11:25.869 sys 0m0.440s 00:11:25.869 04:08:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.869 04:08:40 -- common/autotest_common.sh@10 -- # set +x 00:11:25.869 
************************************ 00:11:25.869 END TEST accel_decmop_full 00:11:25.869 ************************************ 00:11:25.869 04:08:40 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:25.869 04:08:40 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:25.869 04:08:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:25.869 04:08:40 -- common/autotest_common.sh@10 -- # set +x 00:11:25.869 ************************************ 00:11:25.869 START TEST accel_decomp_mcore 00:11:25.869 ************************************ 00:11:25.869 04:08:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:25.869 04:08:40 -- accel/accel.sh@16 -- # local accel_opc 00:11:25.869 04:08:40 -- accel/accel.sh@17 -- # local accel_module 00:11:25.869 04:08:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:25.869 04:08:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:25.869 04:08:40 -- accel/accel.sh@12 -- # build_accel_config 00:11:25.869 04:08:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:25.869 04:08:40 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:25.869 04:08:40 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:25.869 04:08:40 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:25.869 04:08:40 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:25.869 04:08:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:25.869 04:08:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:25.869 04:08:40 -- accel/accel.sh@41 -- # local IFS=, 00:11:25.869 04:08:40 -- accel/accel.sh@42 -- # jq -r . 00:11:25.869 [2024-05-14 04:08:40.383985] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:25.869 [2024-05-14 04:08:40.384103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3877556 ] 00:11:26.128 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.128 [2024-05-14 04:08:40.505488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.128 [2024-05-14 04:08:40.605088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.128 [2024-05-14 04:08:40.605114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.128 [2024-05-14 04:08:40.605132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.128 [2024-05-14 04:08:40.605135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.128 [2024-05-14 04:08:40.609758] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:26.128 [2024-05-14 04:08:40.617749] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:36.115 04:08:50 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:36.115 00:11:36.115 SPDK Configuration: 00:11:36.115 Core mask: 0xf 00:11:36.115 00:11:36.115 Accel Perf Configuration: 00:11:36.115 Workload Type: decompress 00:11:36.115 Transfer size: 4096 bytes 00:11:36.115 Vector count 1 00:11:36.115 Module: iaa 00:11:36.115 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:36.115 Queue depth: 32 00:11:36.115 Allocate depth: 32 00:11:36.115 # threads/core: 1 00:11:36.115 Run time: 1 seconds 00:11:36.115 Verify: Yes 00:11:36.115 00:11:36.115 Running for 1 seconds... 00:11:36.115 00:11:36.115 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:36.115 ------------------------------------------------------------------------------------ 00:11:36.115 0,0 113280/s 257 MiB/s 0 0 00:11:36.115 3,0 112624/s 255 MiB/s 0 0 00:11:36.115 2,0 114688/s 260 MiB/s 0 0 00:11:36.115 1,0 115184/s 261 MiB/s 0 0 00:11:36.115 ==================================================================================== 00:11:36.115 Total 455776/s 1780 MiB/s 0 0' 00:11:36.115 04:08:50 -- accel/accel.sh@20 -- # IFS=: 00:11:36.115 04:08:50 -- accel/accel.sh@20 -- # read -r var val 00:11:36.115 04:08:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:36.115 04:08:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:36.115 04:08:50 -- accel/accel.sh@12 -- # build_accel_config 00:11:36.115 04:08:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:36.115 04:08:50 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:36.115 04:08:50 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:36.115 04:08:50 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:36.115 04:08:50 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:36.115 04:08:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:36.115 04:08:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:36.115 04:08:50 -- accel/accel.sh@41 -- # local IFS=, 00:11:36.115 04:08:50 -- accel/accel.sh@42 -- # jq -r . 00:11:36.115 [2024-05-14 04:08:50.097807] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
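The 0xf run just summarized spreads the workload across four reactors (cores 0-3); the per-core transfer rates in the table sum exactly to the Total row, and the Total bandwidth follows from the 4096-byte transfer size. The snippet below reproduces both numbers from the values printed above; the aggregation rule itself is inferred, not quoted from accel_perf.

    # Reproduce the Total row of the 0xf (4-core) table above.
    per_core=(113280 112624 114688 115184)   # transfers/s for cores 0, 3, 2, 1
    total=0
    for t in "${per_core[@]}"; do total=$(( total + t )); done
    echo "$total"                         # prints 455776 (transfers/s)
    echo $(( total * 4096 / 1048576 ))    # prints 1780 (MiB/s)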
00:11:36.115 [2024-05-14 04:08:50.097921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879611 ] 00:11:36.115 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.115 [2024-05-14 04:08:50.208606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.115 [2024-05-14 04:08:50.300872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.115 [2024-05-14 04:08:50.300999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.115 [2024-05-14 04:08:50.301099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.115 [2024-05-14 04:08:50.301111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.115 [2024-05-14 04:08:50.305626] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:36.115 [2024-05-14 04:08:50.313624] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val= 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val= 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val= 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val=0xf 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val= 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val= 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val=decompress 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val= 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val=iaa 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 
00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val=32 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val=32 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val=1 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val=Yes 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val= 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:42.738 04:08:56 -- accel/accel.sh@21 -- # val= 00:11:42.738 04:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # IFS=: 00:11:42.738 04:08:56 -- accel/accel.sh@20 -- # read -r var val 00:11:45.276 04:08:59 -- accel/accel.sh@21 -- # val= 00:11:45.276 04:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # IFS=: 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # read -r var val 00:11:45.276 04:08:59 -- accel/accel.sh@21 -- # val= 00:11:45.276 04:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # IFS=: 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # read -r var val 00:11:45.276 04:08:59 -- accel/accel.sh@21 -- # val= 00:11:45.276 04:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # IFS=: 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # read -r var val 00:11:45.276 04:08:59 -- accel/accel.sh@21 -- # val= 00:11:45.276 04:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # IFS=: 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # read -r var val 00:11:45.276 04:08:59 -- accel/accel.sh@21 -- # val= 00:11:45.276 04:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # IFS=: 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # read -r var val 00:11:45.276 04:08:59 -- accel/accel.sh@21 -- # val= 00:11:45.276 04:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # IFS=: 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # read -r var val 00:11:45.276 04:08:59 -- accel/accel.sh@21 -- # val= 00:11:45.276 04:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.276 
04:08:59 -- accel/accel.sh@20 -- # IFS=: 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # read -r var val 00:11:45.276 04:08:59 -- accel/accel.sh@21 -- # val= 00:11:45.276 04:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # IFS=: 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # read -r var val 00:11:45.276 04:08:59 -- accel/accel.sh@21 -- # val= 00:11:45.276 04:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # IFS=: 00:11:45.276 04:08:59 -- accel/accel.sh@20 -- # read -r var val 00:11:45.276 04:08:59 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:45.276 04:08:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:45.276 04:08:59 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:45.276 00:11:45.276 real 0m19.401s 00:11:45.276 user 1m2.094s 00:11:45.276 sys 0m0.494s 00:11:45.276 04:08:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.276 04:08:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.276 ************************************ 00:11:45.276 END TEST accel_decomp_mcore 00:11:45.276 ************************************ 00:11:45.276 04:08:59 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:45.276 04:08:59 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:45.276 04:08:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:45.276 04:08:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.276 ************************************ 00:11:45.276 START TEST accel_decomp_full_mcore 00:11:45.276 ************************************ 00:11:45.276 04:08:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:45.276 04:08:59 -- accel/accel.sh@16 -- # local accel_opc 00:11:45.276 04:08:59 -- accel/accel.sh@17 -- # local accel_module 00:11:45.276 04:08:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:45.276 04:08:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:45.276 04:08:59 -- accel/accel.sh@12 -- # build_accel_config 00:11:45.276 04:08:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:45.276 04:08:59 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:45.276 04:08:59 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:45.276 04:08:59 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:45.276 04:08:59 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:45.276 04:08:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:45.276 04:08:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:45.276 04:08:59 -- accel/accel.sh@41 -- # local IFS=, 00:11:45.276 04:08:59 -- accel/accel.sh@42 -- # jq -r . 00:11:45.276 [2024-05-14 04:08:59.818786] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:45.276 [2024-05-14 04:08:59.818901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881503 ] 00:11:45.537 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.537 [2024-05-14 04:08:59.931742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.537 [2024-05-14 04:09:00.035140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.537 [2024-05-14 04:09:00.035256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.537 [2024-05-14 04:09:00.035288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.537 [2024-05-14 04:09:00.035277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.537 [2024-05-14 04:09:00.040123] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:45.537 [2024-05-14 04:09:00.048102] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:55.516 04:09:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:55.516 00:11:55.516 SPDK Configuration: 00:11:55.516 Core mask: 0xf 00:11:55.516 00:11:55.516 Accel Perf Configuration: 00:11:55.516 Workload Type: decompress 00:11:55.516 Transfer size: 111250 bytes 00:11:55.516 Vector count 1 00:11:55.516 Module: iaa 00:11:55.516 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:55.516 Queue depth: 32 00:11:55.516 Allocate depth: 32 00:11:55.516 # threads/core: 1 00:11:55.516 Run time: 1 seconds 00:11:55.516 Verify: Yes 00:11:55.516 00:11:55.516 Running for 1 seconds... 00:11:55.516 00:11:55.516 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:55.516 ------------------------------------------------------------------------------------ 00:11:55.516 0,0 85249/s 4806 MiB/s 0 0 00:11:55.516 3,0 82454/s 4648 MiB/s 0 0 00:11:55.516 2,0 86626/s 4883 MiB/s 0 0 00:11:55.516 1,0 85793/s 4836 MiB/s 0 0 00:11:55.516 ==================================================================================== 00:11:55.516 Total 340122/s 36085 MiB/s 0 0' 00:11:55.516 04:09:09 -- accel/accel.sh@20 -- # IFS=: 00:11:55.516 04:09:09 -- accel/accel.sh@20 -- # read -r var val 00:11:55.516 04:09:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:55.516 04:09:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:55.516 04:09:09 -- accel/accel.sh@12 -- # build_accel_config 00:11:55.516 04:09:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:55.516 04:09:09 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:55.516 04:09:09 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:55.516 04:09:09 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:55.516 04:09:09 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:55.516 04:09:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:55.516 04:09:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:55.516 04:09:09 -- accel/accel.sh@41 -- # local IFS=, 00:11:55.516 04:09:09 -- accel/accel.sh@42 -- # jq -r . 00:11:55.516 [2024-05-14 04:09:09.556564] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:55.516 [2024-05-14 04:09:09.556687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883871 ] 00:11:55.516 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.516 [2024-05-14 04:09:09.673753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.516 [2024-05-14 04:09:09.772178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.516 [2024-05-14 04:09:09.772208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.516 [2024-05-14 04:09:09.772242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.516 [2024-05-14 04:09:09.772230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.516 [2024-05-14 04:09:09.776884] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:55.516 [2024-05-14 04:09:09.784875] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val= 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val= 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val= 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val=0xf 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val= 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val= 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val=decompress 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val= 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val=iaa 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@23 -- # accel_module=iaa 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 
00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val=32 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val=32 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val=1 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val=Yes 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val= 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:02.097 04:09:16 -- accel/accel.sh@21 -- # val= 00:12:02.097 04:09:16 -- accel/accel.sh@22 -- # case "$var" in 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # IFS=: 00:12:02.097 04:09:16 -- accel/accel.sh@20 -- # read -r var val 00:12:05.395 04:09:19 -- accel/accel.sh@21 -- # val= 00:12:05.396 04:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:05.396 04:09:19 -- accel/accel.sh@21 -- # val= 00:12:05.396 04:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:05.396 04:09:19 -- accel/accel.sh@21 -- # val= 00:12:05.396 04:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:05.396 04:09:19 -- accel/accel.sh@21 -- # val= 00:12:05.396 04:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:05.396 04:09:19 -- accel/accel.sh@21 -- # val= 00:12:05.396 04:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:05.396 04:09:19 -- accel/accel.sh@21 -- # val= 00:12:05.396 04:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:05.396 04:09:19 -- accel/accel.sh@21 -- # val= 00:12:05.396 04:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.396 
04:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:05.396 04:09:19 -- accel/accel.sh@21 -- # val= 00:12:05.396 04:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:05.396 04:09:19 -- accel/accel.sh@21 -- # val= 00:12:05.396 04:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # IFS=: 00:12:05.396 04:09:19 -- accel/accel.sh@20 -- # read -r var val 00:12:05.396 04:09:19 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:12:05.396 04:09:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:05.396 04:09:19 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:12:05.396 00:12:05.396 real 0m19.475s 00:12:05.396 user 1m2.295s 00:12:05.396 sys 0m0.528s 00:12:05.396 04:09:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.396 04:09:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.396 ************************************ 00:12:05.396 END TEST accel_decomp_full_mcore 00:12:05.396 ************************************ 00:12:05.396 04:09:19 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:05.396 04:09:19 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:05.396 04:09:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:05.396 04:09:19 -- common/autotest_common.sh@10 -- # set +x 00:12:05.396 ************************************ 00:12:05.396 START TEST accel_decomp_mthread 00:12:05.396 ************************************ 00:12:05.396 04:09:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:05.396 04:09:19 -- accel/accel.sh@16 -- # local accel_opc 00:12:05.396 04:09:19 -- accel/accel.sh@17 -- # local accel_module 00:12:05.396 04:09:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:05.396 04:09:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:05.396 04:09:19 -- accel/accel.sh@12 -- # build_accel_config 00:12:05.396 04:09:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:05.396 04:09:19 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:05.396 04:09:19 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:05.396 04:09:19 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:05.396 04:09:19 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:05.396 04:09:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:05.396 04:09:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:05.396 04:09:19 -- accel/accel.sh@41 -- # local IFS=, 00:12:05.396 04:09:19 -- accel/accel.sh@42 -- # jq -r . 00:12:05.396 [2024-05-14 04:09:19.322577] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:05.396 [2024-05-14 04:09:19.322694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885966 ] 00:12:05.396 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.396 [2024-05-14 04:09:19.438568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.396 [2024-05-14 04:09:19.527261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.396 [2024-05-14 04:09:19.531795] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:05.396 [2024-05-14 04:09:19.539780] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:15.448 04:09:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:15.448 00:12:15.448 SPDK Configuration: 00:12:15.448 Core mask: 0x1 00:12:15.448 00:12:15.448 Accel Perf Configuration: 00:12:15.448 Workload Type: decompress 00:12:15.449 Transfer size: 4096 bytes 00:12:15.449 Vector count 1 00:12:15.449 Module: iaa 00:12:15.449 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:15.449 Queue depth: 32 00:12:15.449 Allocate depth: 32 00:12:15.449 # threads/core: 2 00:12:15.449 Run time: 1 seconds 00:12:15.449 Verify: Yes 00:12:15.449 00:12:15.449 Running for 1 seconds... 00:12:15.449 00:12:15.449 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:15.449 ------------------------------------------------------------------------------------ 00:12:15.449 0,1 141776/s 321 MiB/s 0 0 00:12:15.449 0,0 140784/s 319 MiB/s 0 0 00:12:15.449 ==================================================================================== 00:12:15.449 Total 282560/s 1103 MiB/s 0 0' 00:12:15.449 04:09:28 -- accel/accel.sh@20 -- # IFS=: 00:12:15.449 04:09:28 -- accel/accel.sh@20 -- # read -r var val 00:12:15.449 04:09:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:15.449 04:09:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:12:15.449 04:09:28 -- accel/accel.sh@12 -- # build_accel_config 00:12:15.449 04:09:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:15.449 04:09:28 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:15.449 04:09:28 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:15.449 04:09:28 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:15.449 04:09:28 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:15.449 04:09:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:15.449 04:09:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:15.449 04:09:28 -- accel/accel.sh@41 -- # local IFS=, 00:12:15.449 04:09:28 -- accel/accel.sh@42 -- # jq -r . 00:12:15.449 [2024-05-14 04:09:28.980258] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:15.449 [2024-05-14 04:09:28.980374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887781 ] 00:12:15.449 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.449 [2024-05-14 04:09:29.091914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.449 [2024-05-14 04:09:29.183195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.449 [2024-05-14 04:09:29.187697] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:15.449 [2024-05-14 04:09:29.195682] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:22.027 04:09:35 -- accel/accel.sh@21 -- # val= 00:12:22.027 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.027 04:09:35 -- accel/accel.sh@21 -- # val= 00:12:22.027 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.027 04:09:35 -- accel/accel.sh@21 -- # val= 00:12:22.027 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.027 04:09:35 -- accel/accel.sh@21 -- # val=0x1 00:12:22.027 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.027 04:09:35 -- accel/accel.sh@21 -- # val= 00:12:22.027 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.027 04:09:35 -- accel/accel.sh@21 -- # val= 00:12:22.027 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.027 04:09:35 -- accel/accel.sh@21 -- # val=decompress 00:12:22.027 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.027 04:09:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.027 04:09:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:22.027 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.027 04:09:35 -- accel/accel.sh@21 -- # val= 00:12:22.027 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.027 04:09:35 -- accel/accel.sh@21 -- # val=iaa 00:12:22.027 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.027 04:09:35 -- accel/accel.sh@23 -- # accel_module=iaa 00:12:22.027 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.028 04:09:35 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:22.028 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.028 04:09:35 -- 
accel/accel.sh@20 -- # read -r var val 00:12:22.028 04:09:35 -- accel/accel.sh@21 -- # val=32 00:12:22.028 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.028 04:09:35 -- accel/accel.sh@21 -- # val=32 00:12:22.028 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.028 04:09:35 -- accel/accel.sh@21 -- # val=2 00:12:22.028 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.028 04:09:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:22.028 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.028 04:09:35 -- accel/accel.sh@21 -- # val=Yes 00:12:22.028 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.028 04:09:35 -- accel/accel.sh@21 -- # val= 00:12:22.028 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:22.028 04:09:35 -- accel/accel.sh@21 -- # val= 00:12:22.028 04:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # IFS=: 00:12:22.028 04:09:35 -- accel/accel.sh@20 -- # read -r var val 00:12:24.569 04:09:38 -- accel/accel.sh@21 -- # val= 00:12:24.569 04:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # IFS=: 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # read -r var val 00:12:24.569 04:09:38 -- accel/accel.sh@21 -- # val= 00:12:24.569 04:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # IFS=: 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # read -r var val 00:12:24.569 04:09:38 -- accel/accel.sh@21 -- # val= 00:12:24.569 04:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # IFS=: 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # read -r var val 00:12:24.569 04:09:38 -- accel/accel.sh@21 -- # val= 00:12:24.569 04:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # IFS=: 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # read -r var val 00:12:24.569 04:09:38 -- accel/accel.sh@21 -- # val= 00:12:24.569 04:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # IFS=: 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # read -r var val 00:12:24.569 04:09:38 -- accel/accel.sh@21 -- # val= 00:12:24.569 04:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # IFS=: 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # read -r var val 00:12:24.569 04:09:38 -- accel/accel.sh@21 -- # val= 00:12:24.569 04:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # IFS=: 00:12:24.569 04:09:38 -- accel/accel.sh@20 -- # read -r var val 00:12:24.569 04:09:38 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:12:24.569 04:09:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:24.569 04:09:38 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:12:24.569 
00:12:24.569 real 0m19.323s 00:12:24.569 user 0m6.513s 00:12:24.569 sys 0m0.470s 00:12:24.569 04:09:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.569 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.569 ************************************ 00:12:24.569 END TEST accel_decomp_mthread 00:12:24.569 ************************************ 00:12:24.569 04:09:38 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:24.569 04:09:38 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:24.569 04:09:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:24.569 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.569 ************************************ 00:12:24.569 START TEST accel_deomp_full_mthread 00:12:24.569 ************************************ 00:12:24.569 04:09:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:24.569 04:09:38 -- accel/accel.sh@16 -- # local accel_opc 00:12:24.569 04:09:38 -- accel/accel.sh@17 -- # local accel_module 00:12:24.569 04:09:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:24.569 04:09:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:24.569 04:09:38 -- accel/accel.sh@12 -- # build_accel_config 00:12:24.569 04:09:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:24.569 04:09:38 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:24.569 04:09:38 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:24.569 04:09:38 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:24.569 04:09:38 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:24.569 04:09:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:24.569 04:09:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:24.569 04:09:38 -- accel/accel.sh@41 -- # local IFS=, 00:12:24.569 04:09:38 -- accel/accel.sh@42 -- # jq -r . 00:12:24.569 [2024-05-14 04:09:38.667084] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:24.570 [2024-05-14 04:09:38.667172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889624 ] 00:12:24.570 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.570 [2024-05-14 04:09:38.758280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.570 [2024-05-14 04:09:38.849382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.570 [2024-05-14 04:09:38.853914] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:24.570 [2024-05-14 04:09:38.861900] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:34.557 04:09:48 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:34.557 00:12:34.557 SPDK Configuration: 00:12:34.557 Core mask: 0x1 00:12:34.557 00:12:34.557 Accel Perf Configuration: 00:12:34.557 Workload Type: decompress 00:12:34.557 Transfer size: 111250 bytes 00:12:34.557 Vector count 1 00:12:34.557 Module: iaa 00:12:34.557 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:34.557 Queue depth: 32 00:12:34.557 Allocate depth: 32 00:12:34.557 # threads/core: 2 00:12:34.557 Run time: 1 seconds 00:12:34.557 Verify: Yes 00:12:34.557 00:12:34.557 Running for 1 seconds... 00:12:34.557 00:12:34.557 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:34.557 ------------------------------------------------------------------------------------ 00:12:34.557 0,1 60880/s 3432 MiB/s 0 0 00:12:34.557 0,0 60288/s 3398 MiB/s 0 0 00:12:34.557 ==================================================================================== 00:12:34.557 Total 121168/s 12855 MiB/s 0 0' 00:12:34.557 04:09:48 -- accel/accel.sh@20 -- # IFS=: 00:12:34.557 04:09:48 -- accel/accel.sh@20 -- # read -r var val 00:12:34.557 04:09:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:34.557 04:09:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:34.557 04:09:48 -- accel/accel.sh@12 -- # build_accel_config 00:12:34.557 04:09:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:34.557 04:09:48 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:34.557 04:09:48 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:34.557 04:09:48 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:34.557 04:09:48 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:34.557 04:09:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:34.557 04:09:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:34.557 04:09:48 -- accel/accel.sh@41 -- # local IFS=, 00:12:34.557 04:09:48 -- accel/accel.sh@42 -- # jq -r . 00:12:34.557 [2024-05-14 04:09:48.349265] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:34.557 [2024-05-14 04:09:48.349386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891594 ] 00:12:34.557 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.557 [2024-05-14 04:09:48.465901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.557 [2024-05-14 04:09:48.556614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.557 [2024-05-14 04:09:48.561147] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:34.557 [2024-05-14 04:09:48.569125] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val= 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val= 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val= 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val=0x1 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val= 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val= 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val=decompress 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val= 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val=iaa 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@23 -- # accel_module=iaa 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- 
accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val=32 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val=32 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val=2 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val=Yes 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val= 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:41.138 04:09:54 -- accel/accel.sh@21 -- # val= 00:12:41.138 04:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # IFS=: 00:12:41.138 04:09:54 -- accel/accel.sh@20 -- # read -r var val 00:12:43.682 04:09:57 -- accel/accel.sh@21 -- # val= 00:12:43.682 04:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # IFS=: 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # read -r var val 00:12:43.682 04:09:57 -- accel/accel.sh@21 -- # val= 00:12:43.682 04:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # IFS=: 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # read -r var val 00:12:43.682 04:09:57 -- accel/accel.sh@21 -- # val= 00:12:43.682 04:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # IFS=: 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # read -r var val 00:12:43.682 04:09:57 -- accel/accel.sh@21 -- # val= 00:12:43.682 04:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # IFS=: 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # read -r var val 00:12:43.682 04:09:57 -- accel/accel.sh@21 -- # val= 00:12:43.682 04:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # IFS=: 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # read -r var val 00:12:43.682 04:09:57 -- accel/accel.sh@21 -- # val= 00:12:43.682 04:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # IFS=: 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # read -r var val 00:12:43.682 04:09:57 -- accel/accel.sh@21 -- # val= 00:12:43.682 04:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # IFS=: 00:12:43.682 04:09:57 -- accel/accel.sh@20 -- # read -r var val 00:12:43.682 04:09:57 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:12:43.682 04:09:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:43.682 04:09:57 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:12:43.682 
00:12:43.682 real 0m19.357s 00:12:43.682 user 0m6.534s 00:12:43.682 sys 0m0.456s 00:12:43.682 04:09:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.682 04:09:57 -- common/autotest_common.sh@10 -- # set +x 00:12:43.682 ************************************ 00:12:43.682 END TEST accel_deomp_full_mthread 00:12:43.682 ************************************ 00:12:43.682 04:09:58 -- accel/accel.sh@116 -- # [[ n == y ]] 00:12:43.682 04:09:58 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:43.682 04:09:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:43.682 04:09:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:43.682 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:12:43.682 04:09:58 -- accel/accel.sh@129 -- # build_accel_config 00:12:43.682 04:09:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:43.682 04:09:58 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:43.682 04:09:58 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:43.682 04:09:58 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:43.682 04:09:58 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:43.682 04:09:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:43.682 04:09:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:43.682 04:09:58 -- accel/accel.sh@41 -- # local IFS=, 00:12:43.682 04:09:58 -- accel/accel.sh@42 -- # jq -r . 00:12:43.682 ************************************ 00:12:43.682 START TEST accel_dif_functional_tests 00:12:43.682 ************************************ 00:12:43.682 04:09:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:43.682 [2024-05-14 04:09:58.071766] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:43.682 [2024-05-14 04:09:58.071850] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893548 ] 00:12:43.682 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.682 [2024-05-14 04:09:58.159354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:43.682 [2024-05-14 04:09:58.252065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.682 [2024-05-14 04:09:58.252160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.682 [2024-05-14 04:09:58.252167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.682 [2024-05-14 04:09:58.256703] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:43.682 [2024-05-14 04:09:58.264696] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:51.818 00:12:51.818 00:12:51.818 CUnit - A unit testing framework for C - Version 2.1-3 00:12:51.818 http://cunit.sourceforge.net/ 00:12:51.818 00:12:51.818 00:12:51.818 Suite: accel_dif 00:12:51.818 Test: verify: DIF generated, GUARD check ...passed 00:12:51.818 Test: verify: DIF generated, APPTAG check ...passed 00:12:51.818 Test: verify: DIF generated, REFTAG check ...passed 00:12:51.818 Test: verify: DIF not generated, GUARD check ...[2024-05-14 04:10:06.176602] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:51.818 [2024-05-14 04:10:06.176648] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-14 04:10:06.176662] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.176674] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.176682] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.176691] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.176699] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:51.818 [2024-05-14 04:10:06.176711] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:51.818 [2024-05-14 04:10:06.176720] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:51.818 [2024-05-14 04:10:06.176748] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:51.818 [2024-05-14 04:10:06.176759] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=4, offset=0 00:12:51.818 [2024-05-14 04:10:06.176785] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:51.818 passed 00:12:51.818 Test: verify: DIF not generated, APPTAG check ...[2024-05-14 04:10:06.176839] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:51.818 [2024-05-14 04:10:06.176852] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-14 04:10:06.176865] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.176875] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.176885] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.176895] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.176906] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:51.818 [2024-05-14 04:10:06.176914] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:51.818 [2024-05-14 04:10:06.176925] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:51.818 [2024-05-14 04:10:06.176936] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:51.818 [2024-05-14 04:10:06.176949] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:12:51.818 [2024-05-14 04:10:06.176968] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:51.818 passed 00:12:51.818 Test: verify: DIF not generated, REFTAG check ...[2024-05-14 04:10:06.177000] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:51.818 [2024-05-14 04:10:06.177015] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-14 04:10:06.177024] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177035] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177045] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177056] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177065] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:51.818 [2024-05-14 04:10:06.177083] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:51.818 [2024-05-14 04:10:06.177096] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:51.818 [2024-05-14 04:10:06.177111] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:51.818 [2024-05-14 04:10:06.177122] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=1, offset=0 00:12:51.818 [2024-05-14 04:10:06.177143] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:51.818 passed 00:12:51.818 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:51.818 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-14 04:10:06.177224] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:51.818 [2024-05-14 04:10:06.177236] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-14 04:10:06.177247] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177257] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177268] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177278] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177289] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:51.818 [2024-05-14 04:10:06.177298] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:51.818 [2024-05-14 04:10:06.177311] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:51.818 [2024-05-14 04:10:06.177324] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:51.818 [2024-05-14 04:10:06.177335] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:12:51.818 passed 00:12:51.818 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:12:51.818 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:51.818 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:51.818 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-14 04:10:06.177490] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:51.818 [2024-05-14 04:10:06.177504] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-14 04:10:06.177514] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177525] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177536] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177547] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.818 [2024-05-14 04:10:06.177562] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:51.818 [2024-05-14 04:10:06.177573] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:51.818 [2024-05-14 04:10:06.177582] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:51.818 [2024-05-14 04:10:06.177593] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:51.818 [2024-05-14 04:10:06.177602] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-14 04:10:06.177613] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.819 [2024-05-14 04:10:06.177622] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.819 [2024-05-14 04:10:06.177633] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.819 [2024-05-14 04:10:06.177642] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.819 [2024-05-14 04:10:06.177652] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:51.819 [2024-05-14 04:10:06.177661] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:51.819 [2024-05-14 04:10:06.177674] idxd_user.c: 
436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:51.819 [2024-05-14 04:10:06.177686] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:51.819 [2024-05-14 04:10:06.177699] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=1, offset=0 00:12:51.819 [2024-05-14 04:10:06.177712] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x5 00:12:51.819 passed 00:12:51.819 Test: generate copy: DIF generated, GUARD check ...[2024-05-14 04:10:06.177725] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-14 04:10:06.177734] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.819 [2024-05-14 04:10:06.177747] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.819 [2024-05-14 04:10:06.177756] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.819 [2024-05-14 04:10:06.177767] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:51.819 [2024-05-14 04:10:06.177775] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:51.819 [2024-05-14 04:10:06.177786] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:51.819 [2024-05-14 04:10:06.177795] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:51.819 passed 00:12:51.819 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:51.819 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:51.819 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-05-14 04:10:06.177958] idxd.c:1571:idxd_validate_dif_insert_params: *ERROR*: Guard check flag must be set. 00:12:51.819 passed 00:12:51.819 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-05-14 04:10:06.177997] idxd.c:1576:idxd_validate_dif_insert_params: *ERROR*: Application Tag check flag must be set. 00:12:51.819 passed 00:12:51.819 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-05-14 04:10:06.178034] idxd.c:1581:idxd_validate_dif_insert_params: *ERROR*: Reference Tag check flag must be set. 00:12:51.819 passed 00:12:51.819 Test: generate copy: iovecs-len validate ...[2024-05-14 04:10:06.178071] idxd.c:1608:idxd_validate_dif_insert_iovecs: *ERROR*: Invalid length of data in src (4096) and dst (4176) in iovecs[0]. 
00:12:51.819 passed 00:12:51.819 Test: generate copy: buffer alignment validate ...passed 00:12:51.819 00:12:51.819 Run Summary: Type Total Ran Passed Failed Inactive 00:12:51.819 suites 1 1 n/a 0 0 00:12:51.819 tests 20 20 20 0 0 00:12:51.819 asserts 204 204 204 0 n/a 00:12:51.819 00:12:51.819 Elapsed time = 0.003 seconds 00:12:55.190 00:12:55.190 real 0m11.006s 00:12:55.190 user 0m22.106s 00:12:55.190 sys 0m0.230s 00:12:55.190 04:10:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.190 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:12:55.190 ************************************ 00:12:55.190 END TEST accel_dif_functional_tests 00:12:55.190 ************************************ 00:12:55.190 00:12:55.190 real 7m8.527s 00:12:55.190 user 4m33.920s 00:12:55.190 sys 0m11.649s 00:12:55.190 04:10:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.190 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:12:55.190 ************************************ 00:12:55.190 END TEST accel 00:12:55.190 ************************************ 00:12:55.190 04:10:09 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:12:55.190 04:10:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:55.190 04:10:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:55.190 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:12:55.190 ************************************ 00:12:55.190 START TEST accel_rpc 00:12:55.190 ************************************ 00:12:55.190 04:10:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:12:55.190 * Looking for test storage... 00:12:55.190 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:12:55.190 04:10:09 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:55.190 04:10:09 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3895722 00:12:55.190 04:10:09 -- accel/accel_rpc.sh@15 -- # waitforlisten 3895722 00:12:55.190 04:10:09 -- common/autotest_common.sh@819 -- # '[' -z 3895722 ']' 00:12:55.190 04:10:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.190 04:10:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:55.190 04:10:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.190 04:10:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:55.190 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:12:55.190 04:10:09 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:55.190 [2024-05-14 04:10:09.254372] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:55.190 [2024-05-14 04:10:09.254499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895722 ] 00:12:55.190 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.190 [2024-05-14 04:10:09.372702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.190 [2024-05-14 04:10:09.466641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:55.190 [2024-05-14 04:10:09.466825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.448 04:10:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:55.448 04:10:09 -- common/autotest_common.sh@852 -- # return 0 00:12:55.448 04:10:09 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:55.448 04:10:09 -- accel/accel_rpc.sh@45 -- # [[ 1 -gt 0 ]] 00:12:55.448 04:10:09 -- accel/accel_rpc.sh@46 -- # run_test accel_scan_dsa_modules accel_scan_dsa_modules_test_suite 00:12:55.448 04:10:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:55.448 04:10:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:55.448 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:12:55.448 ************************************ 00:12:55.448 START TEST accel_scan_dsa_modules 00:12:55.448 ************************************ 00:12:55.448 04:10:09 -- common/autotest_common.sh@1104 -- # accel_scan_dsa_modules_test_suite 00:12:55.448 04:10:09 -- accel/accel_rpc.sh@21 -- # rpc_cmd dsa_scan_accel_module 00:12:55.448 04:10:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.448 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:12:55.448 [2024-05-14 04:10:09.995305] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:55.448 04:10:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.448 04:10:09 -- accel/accel_rpc.sh@22 -- # NOT rpc_cmd dsa_scan_accel_module 00:12:55.448 04:10:09 -- common/autotest_common.sh@640 -- # local es=0 00:12:55.448 04:10:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd dsa_scan_accel_module 00:12:55.448 04:10:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:12:55.448 04:10:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:55.448 04:10:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:12:55.448 04:10:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:55.448 04:10:09 -- common/autotest_common.sh@643 -- # rpc_cmd dsa_scan_accel_module 00:12:55.448 04:10:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.448 04:10:10 -- common/autotest_common.sh@10 -- # set +x 00:12:55.448 request: 00:12:55.448 { 00:12:55.448 "method": "dsa_scan_accel_module", 00:12:55.448 "req_id": 1 00:12:55.448 } 00:12:55.448 Got JSON-RPC error response 00:12:55.448 response: 00:12:55.448 { 00:12:55.448 "code": -114, 00:12:55.448 "message": "Operation already in progress" 00:12:55.448 } 00:12:55.448 04:10:10 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:55.448 04:10:10 -- common/autotest_common.sh@643 -- # es=1 00:12:55.448 04:10:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:55.448 04:10:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:55.448 04:10:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:55.448 00:12:55.448 real 0m0.015s 00:12:55.448 user 0m0.002s 00:12:55.448 sys 0m0.002s 00:12:55.448 
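Note on the exchange traced above: accel_scan_dsa_modules calls the dsa_scan_accel_module RPC twice; the first call enables the DSA user-mode module, and the repeated call is expected to fail with JSON-RPC error -114 ("Operation already in progress"). A minimal sketch of the same exchange, assuming a target started with --wait-for-rpc and listening on the default /var/tmp/spdk.sock socket (paths relative to the SPDK repo are illustrative):

  # First scan enables the DSA accel module on the running target
  ./scripts/rpc.py dsa_scan_accel_module
  # Repeating the scan on the same target is rejected once the module is enabled
  ./scripts/rpc.py dsa_scan_accel_module   # expected: JSON-RPC error -114, "Operation already in progress"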
04:10:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.448 04:10:10 -- common/autotest_common.sh@10 -- # set +x 00:12:55.448 ************************************ 00:12:55.448 END TEST accel_scan_dsa_modules 00:12:55.448 ************************************ 00:12:55.448 04:10:10 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:55.448 04:10:10 -- accel/accel_rpc.sh@49 -- # [[ 1 -gt 0 ]] 00:12:55.448 04:10:10 -- accel/accel_rpc.sh@50 -- # run_test accel_scan_iaa_modules accel_scan_iaa_modules_test_suite 00:12:55.448 04:10:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:55.448 04:10:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:55.448 04:10:10 -- common/autotest_common.sh@10 -- # set +x 00:12:55.707 ************************************ 00:12:55.707 START TEST accel_scan_iaa_modules 00:12:55.707 ************************************ 00:12:55.707 04:10:10 -- common/autotest_common.sh@1104 -- # accel_scan_iaa_modules_test_suite 00:12:55.707 04:10:10 -- accel/accel_rpc.sh@29 -- # rpc_cmd iaa_scan_accel_module 00:12:55.707 04:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.707 04:10:10 -- common/autotest_common.sh@10 -- # set +x 00:12:55.707 [2024-05-14 04:10:10.039314] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:55.707 04:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.707 04:10:10 -- accel/accel_rpc.sh@30 -- # NOT rpc_cmd iaa_scan_accel_module 00:12:55.707 04:10:10 -- common/autotest_common.sh@640 -- # local es=0 00:12:55.707 04:10:10 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd iaa_scan_accel_module 00:12:55.707 04:10:10 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:12:55.707 04:10:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:55.707 04:10:10 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:12:55.707 04:10:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:55.707 04:10:10 -- common/autotest_common.sh@643 -- # rpc_cmd iaa_scan_accel_module 00:12:55.707 04:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.707 04:10:10 -- common/autotest_common.sh@10 -- # set +x 00:12:55.707 request: 00:12:55.707 { 00:12:55.707 "method": "iaa_scan_accel_module", 00:12:55.707 "req_id": 1 00:12:55.707 } 00:12:55.707 Got JSON-RPC error response 00:12:55.707 response: 00:12:55.707 { 00:12:55.707 "code": -114, 00:12:55.707 "message": "Operation already in progress" 00:12:55.707 } 00:12:55.707 04:10:10 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:55.707 04:10:10 -- common/autotest_common.sh@643 -- # es=1 00:12:55.707 04:10:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:55.707 04:10:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:55.707 04:10:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:55.707 00:12:55.707 real 0m0.015s 00:12:55.707 user 0m0.002s 00:12:55.707 sys 0m0.001s 00:12:55.707 04:10:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.707 04:10:10 -- common/autotest_common.sh@10 -- # set +x 00:12:55.707 ************************************ 00:12:55.707 END TEST accel_scan_iaa_modules 00:12:55.707 ************************************ 00:12:55.707 04:10:10 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:55.707 04:10:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:55.707 04:10:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:55.707 04:10:10 
-- common/autotest_common.sh@10 -- # set +x 00:12:55.707 ************************************ 00:12:55.707 START TEST accel_assign_opcode 00:12:55.707 ************************************ 00:12:55.707 04:10:10 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:12:55.707 04:10:10 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:55.707 04:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.707 04:10:10 -- common/autotest_common.sh@10 -- # set +x 00:12:55.707 [2024-05-14 04:10:10.083350] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:55.707 04:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.707 04:10:10 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:55.707 04:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.707 04:10:10 -- common/autotest_common.sh@10 -- # set +x 00:12:55.707 [2024-05-14 04:10:10.091327] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:55.707 04:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.707 04:10:10 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:55.707 04:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.707 04:10:10 -- common/autotest_common.sh@10 -- # set +x 00:13:03.914 04:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.914 04:10:18 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:03.914 04:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.914 04:10:18 -- common/autotest_common.sh@10 -- # set +x 00:13:03.915 04:10:18 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:03.915 04:10:18 -- accel/accel_rpc.sh@42 -- # grep software 00:13:03.915 04:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.915 software 00:13:03.915 00:13:03.915 real 0m8.159s 00:13:03.915 user 0m0.031s 00:13:03.915 sys 0m0.008s 00:13:03.915 04:10:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.915 04:10:18 -- common/autotest_common.sh@10 -- # set +x 00:13:03.915 ************************************ 00:13:03.915 END TEST accel_assign_opcode 00:13:03.915 ************************************ 00:13:03.915 04:10:18 -- accel/accel_rpc.sh@55 -- # killprocess 3895722 00:13:03.915 04:10:18 -- common/autotest_common.sh@926 -- # '[' -z 3895722 ']' 00:13:03.915 04:10:18 -- common/autotest_common.sh@930 -- # kill -0 3895722 00:13:03.915 04:10:18 -- common/autotest_common.sh@931 -- # uname 00:13:03.915 04:10:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:03.915 04:10:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3895722 00:13:03.915 04:10:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:03.915 04:10:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:03.915 04:10:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3895722' 00:13:03.915 killing process with pid 3895722 00:13:03.915 04:10:18 -- common/autotest_common.sh@945 -- # kill 3895722 00:13:03.915 04:10:18 -- common/autotest_common.sh@950 -- # wait 3895722 00:13:07.217 00:13:07.217 real 0m12.473s 00:13:07.217 user 0m4.002s 00:13:07.217 sys 0m0.588s 00:13:07.217 04:10:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.217 04:10:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.217 ************************************ 00:13:07.217 END TEST 
accel_rpc 00:13:07.217 ************************************ 00:13:07.217 04:10:21 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:13:07.217 04:10:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:07.217 04:10:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:07.217 04:10:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.217 ************************************ 00:13:07.217 START TEST app_cmdline 00:13:07.217 ************************************ 00:13:07.217 04:10:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:13:07.217 * Looking for test storage... 00:13:07.217 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:13:07.217 04:10:21 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:07.217 04:10:21 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3898246 00:13:07.217 04:10:21 -- app/cmdline.sh@18 -- # waitforlisten 3898246 00:13:07.217 04:10:21 -- common/autotest_common.sh@819 -- # '[' -z 3898246 ']' 00:13:07.217 04:10:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.217 04:10:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:07.217 04:10:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.217 04:10:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:07.217 04:10:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.217 04:10:21 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:07.217 [2024-05-14 04:10:21.776191] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:07.217 [2024-05-14 04:10:21.776321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898246 ] 00:13:07.478 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.478 [2024-05-14 04:10:21.895945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.478 [2024-05-14 04:10:21.989461] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:07.478 [2024-05-14 04:10:21.989643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.045 04:10:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:08.045 04:10:22 -- common/autotest_common.sh@852 -- # return 0 00:13:08.045 04:10:22 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:13:08.045 { 00:13:08.045 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:13:08.045 "fields": { 00:13:08.045 "major": 24, 00:13:08.045 "minor": 1, 00:13:08.045 "patch": 1, 00:13:08.045 "suffix": "-pre", 00:13:08.045 "commit": "36faa8c31" 00:13:08.045 } 00:13:08.045 } 00:13:08.045 04:10:22 -- app/cmdline.sh@22 -- # expected_methods=() 00:13:08.045 04:10:22 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:08.045 04:10:22 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:08.045 04:10:22 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:08.045 04:10:22 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:08.045 04:10:22 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:08.045 04:10:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.045 04:10:22 -- app/cmdline.sh@26 -- # sort 00:13:08.045 04:10:22 -- common/autotest_common.sh@10 -- # set +x 00:13:08.045 04:10:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.045 04:10:22 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:08.045 04:10:22 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:08.045 04:10:22 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:08.045 04:10:22 -- common/autotest_common.sh@640 -- # local es=0 00:13:08.045 04:10:22 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:08.045 04:10:22 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:08.045 04:10:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:08.045 04:10:22 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:08.045 04:10:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:08.045 04:10:22 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:08.045 04:10:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:08.045 04:10:22 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:08.045 04:10:22 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:13:08.045 04:10:22 -- common/autotest_common.sh@643 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:08.305 request: 00:13:08.305 { 00:13:08.305 "method": "env_dpdk_get_mem_stats", 00:13:08.305 "req_id": 1 00:13:08.305 } 00:13:08.305 Got JSON-RPC error response 00:13:08.305 response: 00:13:08.305 { 00:13:08.305 "code": -32601, 00:13:08.305 "message": "Method not found" 00:13:08.305 } 00:13:08.305 04:10:22 -- common/autotest_common.sh@643 -- # es=1 00:13:08.305 04:10:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:08.305 04:10:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:08.305 04:10:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:08.305 04:10:22 -- app/cmdline.sh@1 -- # killprocess 3898246 00:13:08.305 04:10:22 -- common/autotest_common.sh@926 -- # '[' -z 3898246 ']' 00:13:08.305 04:10:22 -- common/autotest_common.sh@930 -- # kill -0 3898246 00:13:08.305 04:10:22 -- common/autotest_common.sh@931 -- # uname 00:13:08.305 04:10:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:08.305 04:10:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3898246 00:13:08.305 04:10:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:08.305 04:10:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:08.305 04:10:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3898246' 00:13:08.305 killing process with pid 3898246 00:13:08.305 04:10:22 -- common/autotest_common.sh@945 -- # kill 3898246 00:13:08.305 04:10:22 -- common/autotest_common.sh@950 -- # wait 3898246 00:13:09.244 00:13:09.244 real 0m2.029s 00:13:09.244 user 0m2.138s 00:13:09.244 sys 0m0.461s 00:13:09.244 04:10:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.244 04:10:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.244 ************************************ 00:13:09.244 END TEST app_cmdline 00:13:09.244 ************************************ 00:13:09.244 04:10:23 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:13:09.244 04:10:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:09.244 04:10:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.244 04:10:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.244 ************************************ 00:13:09.244 START TEST version 00:13:09.244 ************************************ 00:13:09.244 04:10:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:13:09.244 * Looking for test storage... 
00:13:09.244 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:13:09.244 04:10:23 -- app/version.sh@17 -- # get_header_version major 00:13:09.244 04:10:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:09.244 04:10:23 -- app/version.sh@14 -- # cut -f2 00:13:09.244 04:10:23 -- app/version.sh@14 -- # tr -d '"' 00:13:09.244 04:10:23 -- app/version.sh@17 -- # major=24 00:13:09.244 04:10:23 -- app/version.sh@18 -- # get_header_version minor 00:13:09.244 04:10:23 -- app/version.sh@14 -- # tr -d '"' 00:13:09.244 04:10:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:09.244 04:10:23 -- app/version.sh@14 -- # cut -f2 00:13:09.244 04:10:23 -- app/version.sh@18 -- # minor=1 00:13:09.244 04:10:23 -- app/version.sh@19 -- # get_header_version patch 00:13:09.244 04:10:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:09.244 04:10:23 -- app/version.sh@14 -- # tr -d '"' 00:13:09.244 04:10:23 -- app/version.sh@14 -- # cut -f2 00:13:09.244 04:10:23 -- app/version.sh@19 -- # patch=1 00:13:09.244 04:10:23 -- app/version.sh@20 -- # get_header_version suffix 00:13:09.244 04:10:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:13:09.244 04:10:23 -- app/version.sh@14 -- # cut -f2 00:13:09.244 04:10:23 -- app/version.sh@14 -- # tr -d '"' 00:13:09.244 04:10:23 -- app/version.sh@20 -- # suffix=-pre 00:13:09.244 04:10:23 -- app/version.sh@22 -- # version=24.1 00:13:09.244 04:10:23 -- app/version.sh@25 -- # (( patch != 0 )) 00:13:09.244 04:10:23 -- app/version.sh@25 -- # version=24.1.1 00:13:09.244 04:10:23 -- app/version.sh@28 -- # version=24.1.1rc0 00:13:09.244 04:10:23 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:13:09.244 04:10:23 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:09.244 04:10:23 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:13:09.244 04:10:23 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:13:09.244 00:13:09.244 real 0m0.126s 00:13:09.244 user 0m0.070s 00:13:09.244 sys 0m0.087s 00:13:09.244 04:10:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.244 04:10:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.244 ************************************ 00:13:09.244 END TEST version 00:13:09.244 ************************************ 00:13:09.504 04:10:23 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:13:09.504 04:10:23 -- spdk/autotest.sh@204 -- # uname -s 00:13:09.504 04:10:23 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:13:09.504 04:10:23 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:13:09.504 04:10:23 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:13:09.504 04:10:23 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:13:09.504 04:10:23 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:13:09.504 04:10:23 -- spdk/autotest.sh@268 -- # timing_exit lib 00:13:09.504 04:10:23 -- common/autotest_common.sh@718 -- # xtrace_disable 
00:13:09.504 04:10:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.504 04:10:23 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:13:09.504 04:10:23 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:13:09.504 04:10:23 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:13:09.504 04:10:23 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:13:09.504 04:10:23 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:13:09.504 04:10:23 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:13:09.504 04:10:23 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:09.504 04:10:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:09.504 04:10:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.504 04:10:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.504 ************************************ 00:13:09.504 START TEST nvmf_tcp 00:13:09.504 ************************************ 00:13:09.504 04:10:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:09.504 * Looking for test storage... 00:13:09.504 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf 00:13:09.504 04:10:23 -- nvmf/nvmf.sh@10 -- # uname -s 00:13:09.504 04:10:23 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:13:09.505 04:10:23 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.505 04:10:23 -- nvmf/common.sh@7 -- # uname -s 00:13:09.505 04:10:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.505 04:10:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.505 04:10:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.505 04:10:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.505 04:10:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.505 04:10:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.505 04:10:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.505 04:10:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.505 04:10:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.505 04:10:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.505 04:10:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:09.505 04:10:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:13:09.505 04:10:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.505 04:10:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.505 04:10:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:09.505 04:10:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:09.505 04:10:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.505 04:10:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.505 04:10:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.505 04:10:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:09.505 04:10:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.505 04:10:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.505 04:10:23 -- paths/export.sh@5 -- # export PATH 00:13:09.505 04:10:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.505 04:10:23 -- nvmf/common.sh@46 -- # : 0 00:13:09.505 04:10:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:09.505 04:10:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:09.505 04:10:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:09.505 04:10:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.505 04:10:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.505 04:10:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:09.505 04:10:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:09.505 04:10:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:09.505 04:10:23 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:09.505 04:10:23 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:13:09.505 04:10:23 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:13:09.505 04:10:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:09.505 04:10:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.505 04:10:23 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:13:09.505 04:10:23 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:09.505 04:10:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:09.505 04:10:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.505 04:10:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.505 ************************************ 00:13:09.505 START TEST nvmf_example 00:13:09.505 ************************************ 00:13:09.505 04:10:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:09.505 * Looking for test storage... 
00:13:09.505 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:09.505 04:10:24 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.505 04:10:24 -- nvmf/common.sh@7 -- # uname -s 00:13:09.505 04:10:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.505 04:10:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.505 04:10:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.505 04:10:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.505 04:10:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.505 04:10:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.505 04:10:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.505 04:10:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.505 04:10:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.505 04:10:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.505 04:10:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:09.505 04:10:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:13:09.505 04:10:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.505 04:10:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.505 04:10:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:09.505 04:10:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:09.505 04:10:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.505 04:10:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.505 04:10:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.505 04:10:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.505 04:10:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.505 04:10:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.505 04:10:24 -- paths/export.sh@5 -- # export PATH 00:13:09.505 04:10:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.505 04:10:24 -- nvmf/common.sh@46 -- # : 0 00:13:09.505 04:10:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:09.505 04:10:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:09.505 04:10:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:09.505 04:10:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.505 04:10:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.505 04:10:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:09.505 04:10:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:09.505 04:10:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:09.505 04:10:24 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:09.505 04:10:24 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:09.505 04:10:24 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:09.505 04:10:24 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:09.505 04:10:24 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:09.505 04:10:24 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:09.505 04:10:24 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:09.505 04:10:24 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:09.505 04:10:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:09.505 04:10:24 -- common/autotest_common.sh@10 -- # set +x 00:13:09.505 04:10:24 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:09.505 04:10:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:09.505 04:10:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.505 04:10:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:09.505 04:10:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:09.506 04:10:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:09.506 04:10:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.506 04:10:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.506 04:10:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.506 04:10:24 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:09.506 04:10:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:09.506 04:10:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:09.506 04:10:24 -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.783 04:10:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:14.783 04:10:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:14.783 04:10:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:14.783 04:10:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:14.783 04:10:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:14.783 04:10:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:14.783 04:10:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:14.783 04:10:29 -- nvmf/common.sh@294 -- # net_devs=() 00:13:14.783 04:10:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:14.783 04:10:29 -- nvmf/common.sh@295 -- # e810=() 00:13:14.783 04:10:29 -- nvmf/common.sh@295 -- # local -ga e810 00:13:14.783 04:10:29 -- nvmf/common.sh@296 -- # x722=() 00:13:14.783 04:10:29 -- nvmf/common.sh@296 -- # local -ga x722 00:13:14.783 04:10:29 -- nvmf/common.sh@297 -- # mlx=() 00:13:14.783 04:10:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:14.783 04:10:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.783 04:10:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:14.783 04:10:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:14.783 04:10:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:14.783 04:10:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:14.783 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:14.783 04:10:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:14.783 04:10:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:14.783 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:14.783 04:10:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.783 
04:10:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:14.783 04:10:29 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:14.783 04:10:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.783 04:10:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:14.783 04:10:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.783 04:10:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:14.783 Found net devices under 0000:27:00.0: cvl_0_0 00:13:14.783 04:10:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.783 04:10:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:14.783 04:10:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.783 04:10:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:14.783 04:10:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.783 04:10:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:14.783 Found net devices under 0000:27:00.1: cvl_0_1 00:13:14.783 04:10:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.783 04:10:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:14.783 04:10:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:14.783 04:10:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:14.783 04:10:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:14.783 04:10:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.783 04:10:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.783 04:10:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.783 04:10:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:14.783 04:10:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.783 04:10:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.783 04:10:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:14.783 04:10:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.783 04:10:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.783 04:10:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:14.784 04:10:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:14.784 04:10:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.784 04:10:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.784 04:10:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.784 04:10:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.784 04:10:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:14.784 04:10:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.044 04:10:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.044 04:10:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.044 04:10:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:15.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:15.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:13:15.044 00:13:15.044 --- 10.0.0.2 ping statistics --- 00:13:15.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.044 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:13:15.044 04:10:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:13:15.044 00:13:15.044 --- 10.0.0.1 ping statistics --- 00:13:15.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.044 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:13:15.044 04:10:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.044 04:10:29 -- nvmf/common.sh@410 -- # return 0 00:13:15.044 04:10:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:15.044 04:10:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.044 04:10:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:15.044 04:10:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:15.044 04:10:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.045 04:10:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:15.045 04:10:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:15.045 04:10:29 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:15.045 04:10:29 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:15.045 04:10:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:15.045 04:10:29 -- common/autotest_common.sh@10 -- # set +x 00:13:15.045 04:10:29 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:15.045 04:10:29 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:15.045 04:10:29 -- target/nvmf_example.sh@34 -- # nvmfpid=3902238 00:13:15.045 04:10:29 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:15.045 04:10:29 -- target/nvmf_example.sh@36 -- # waitforlisten 3902238 00:13:15.045 04:10:29 -- common/autotest_common.sh@819 -- # '[' -z 3902238 ']' 00:13:15.045 04:10:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.045 04:10:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:15.045 04:10:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:15.045 04:10:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:15.045 04:10:29 -- common/autotest_common.sh@10 -- # set +x 00:13:15.045 04:10:29 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:15.304 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.875 04:10:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:15.875 04:10:30 -- common/autotest_common.sh@852 -- # return 0 00:13:15.875 04:10:30 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:15.875 04:10:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:15.875 04:10:30 -- common/autotest_common.sh@10 -- # set +x 00:13:15.875 04:10:30 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.875 04:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.875 04:10:30 -- common/autotest_common.sh@10 -- # set +x 00:13:15.875 04:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.875 04:10:30 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:15.875 04:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.875 04:10:30 -- common/autotest_common.sh@10 -- # set +x 00:13:15.875 04:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.875 04:10:30 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:15.875 04:10:30 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:15.875 04:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.875 04:10:30 -- common/autotest_common.sh@10 -- # set +x 00:13:15.875 04:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.875 04:10:30 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:15.875 04:10:30 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:15.875 04:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.875 04:10:30 -- common/autotest_common.sh@10 -- # set +x 00:13:15.875 04:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.875 04:10:30 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.875 04:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.875 04:10:30 -- common/autotest_common.sh@10 -- # set +x 00:13:15.875 04:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.875 04:10:30 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:15.875 04:10:30 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:16.136 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.198 Initializing NVMe Controllers 00:13:26.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:26.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:26.198 Initialization complete. Launching workers. 
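The nvmf_example target above is configured entirely over JSON-RPC before spdk_nvme_perf is launched. A condensed sketch of the same sequence, assuming the example app is serving RPC on the default /var/tmp/spdk.sock socket; all method names and arguments are taken from the trace above:

  # Create the TCP transport (options as traced) and a 64 MiB malloc bdev with 512-byte blocks
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512
  # Subsystem cnode1 with the malloc namespace, listening on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Load generator that produces the latency numbers reported below
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'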
00:13:26.198 ======================================================== 00:13:26.198 Latency(us) 00:13:26.198 Device Information : IOPS MiB/s Average min max 00:13:26.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19120.35 74.69 3346.84 698.05 15505.95 00:13:26.198 ======================================================== 00:13:26.198 Total : 19120.35 74.69 3346.84 698.05 15505.95 00:13:26.198 00:13:26.198 04:10:40 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:26.198 04:10:40 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:26.198 04:10:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:26.198 04:10:40 -- nvmf/common.sh@116 -- # sync 00:13:26.198 04:10:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:26.198 04:10:40 -- nvmf/common.sh@119 -- # set +e 00:13:26.198 04:10:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:26.198 04:10:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:26.198 rmmod nvme_tcp 00:13:26.457 rmmod nvme_fabrics 00:13:26.457 rmmod nvme_keyring 00:13:26.457 04:10:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:26.457 04:10:40 -- nvmf/common.sh@123 -- # set -e 00:13:26.457 04:10:40 -- nvmf/common.sh@124 -- # return 0 00:13:26.457 04:10:40 -- nvmf/common.sh@477 -- # '[' -n 3902238 ']' 00:13:26.457 04:10:40 -- nvmf/common.sh@478 -- # killprocess 3902238 00:13:26.457 04:10:40 -- common/autotest_common.sh@926 -- # '[' -z 3902238 ']' 00:13:26.457 04:10:40 -- common/autotest_common.sh@930 -- # kill -0 3902238 00:13:26.457 04:10:40 -- common/autotest_common.sh@931 -- # uname 00:13:26.457 04:10:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:26.457 04:10:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3902238 00:13:26.457 04:10:40 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:13:26.457 04:10:40 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:13:26.457 04:10:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3902238' 00:13:26.457 killing process with pid 3902238 00:13:26.457 04:10:40 -- common/autotest_common.sh@945 -- # kill 3902238 00:13:26.457 04:10:40 -- common/autotest_common.sh@950 -- # wait 3902238 00:13:27.026 nvmf threads initialize successfully 00:13:27.026 bdev subsystem init successfully 00:13:27.026 created a nvmf target service 00:13:27.026 create targets's poll groups done 00:13:27.026 all subsystems of target started 00:13:27.026 nvmf target is running 00:13:27.026 all subsystems of target stopped 00:13:27.026 destroy targets's poll groups done 00:13:27.026 destroyed the nvmf target service 00:13:27.026 bdev subsystem finish successfully 00:13:27.026 nvmf threads destroy successfully 00:13:27.026 04:10:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:27.026 04:10:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:27.026 04:10:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:27.026 04:10:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.026 04:10:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:27.026 04:10:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.026 04:10:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.026 04:10:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.925 04:10:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:28.925 04:10:43 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:28.925 04:10:43 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:13:28.925 04:10:43 -- common/autotest_common.sh@10 -- # set +x 00:13:28.925 00:13:28.925 real 0m19.484s 00:13:28.925 user 0m47.053s 00:13:28.925 sys 0m5.074s 00:13:28.925 04:10:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.925 04:10:43 -- common/autotest_common.sh@10 -- # set +x 00:13:28.925 ************************************ 00:13:28.925 END TEST nvmf_example 00:13:28.925 ************************************ 00:13:28.925 04:10:43 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:28.925 04:10:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:28.925 04:10:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:28.925 04:10:43 -- common/autotest_common.sh@10 -- # set +x 00:13:28.925 ************************************ 00:13:28.925 START TEST nvmf_filesystem 00:13:28.925 ************************************ 00:13:28.925 04:10:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:29.186 * Looking for test storage... 00:13:29.186 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.186 04:10:43 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh 00:13:29.186 04:10:43 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:29.186 04:10:43 -- common/autotest_common.sh@34 -- # set -e 00:13:29.186 04:10:43 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:29.186 04:10:43 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:29.186 04:10:43 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:29.186 04:10:43 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh 00:13:29.186 04:10:43 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:29.186 04:10:43 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:29.186 04:10:43 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:29.186 04:10:43 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:29.186 04:10:43 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:29.186 04:10:43 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:29.186 04:10:43 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:29.186 04:10:43 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:29.187 04:10:43 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:29.187 04:10:43 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:29.187 04:10:43 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:29.187 04:10:43 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:29.187 04:10:43 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:29.187 04:10:43 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:29.187 04:10:43 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:29.187 04:10:43 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:29.187 04:10:43 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:29.187 04:10:43 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:29.187 04:10:43 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:13:29.187 04:10:43 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:13:29.187 04:10:43 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:29.187 04:10:43 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:29.187 04:10:43 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:29.187 04:10:43 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:29.187 04:10:43 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:29.187 04:10:43 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:29.187 04:10:43 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:29.187 04:10:43 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:29.187 04:10:43 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:29.187 04:10:43 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:29.187 04:10:43 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:29.187 04:10:43 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:29.187 04:10:43 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:29.187 04:10:43 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:29.187 04:10:43 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:29.187 04:10:43 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:13:29.187 04:10:43 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:29.187 04:10:43 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:29.187 04:10:43 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:29.187 04:10:43 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:29.187 04:10:43 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:29.187 04:10:43 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:29.187 04:10:43 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:29.187 04:10:43 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:29.187 04:10:43 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:29.187 04:10:43 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:13:29.187 04:10:43 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:13:29.187 04:10:43 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:29.187 04:10:43 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:13:29.187 04:10:43 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:13:29.187 04:10:43 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:13:29.187 04:10:43 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:13:29.187 04:10:43 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:13:29.187 04:10:43 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:13:29.187 04:10:43 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:13:29.187 04:10:43 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:13:29.187 04:10:43 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:13:29.187 04:10:43 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:13:29.187 04:10:43 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:13:29.187 04:10:43 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:13:29.187 04:10:43 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:13:29.187 04:10:43 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:13:29.187 04:10:43 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:13:29.187 04:10:43 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:13:29.187 04:10:43 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:13:29.187 04:10:43 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:29.187 04:10:43 -- common/build_config.sh@67 
-- # CONFIG_FC=n 00:13:29.187 04:10:43 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:13:29.187 04:10:43 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:13:29.187 04:10:43 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:13:29.187 04:10:43 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:13:29.187 04:10:43 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:13:29.187 04:10:43 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:13:29.187 04:10:43 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:13:29.187 04:10:43 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:13:29.187 04:10:43 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:13:29.187 04:10:43 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:29.187 04:10:43 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:13:29.187 04:10:43 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:13:29.187 04:10:43 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:13:29.187 04:10:43 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:13:29.187 04:10:43 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:13:29.187 04:10:43 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:13:29.187 04:10:43 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:13:29.187 04:10:43 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:13:29.187 04:10:43 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:13:29.187 04:10:43 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:13:29.187 04:10:43 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:29.187 04:10:43 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:29.187 04:10:43 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:29.187 04:10:43 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:29.187 04:10:43 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:29.187 04:10:43 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:29.187 04:10:43 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/config.h ]] 00:13:29.187 04:10:43 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:29.187 #define SPDK_CONFIG_H 00:13:29.187 #define SPDK_CONFIG_APPS 1 00:13:29.187 #define SPDK_CONFIG_ARCH native 00:13:29.187 #define SPDK_CONFIG_ASAN 1 00:13:29.187 #undef SPDK_CONFIG_AVAHI 00:13:29.187 #undef SPDK_CONFIG_CET 00:13:29.187 #define SPDK_CONFIG_COVERAGE 1 00:13:29.187 #define SPDK_CONFIG_CROSS_PREFIX 00:13:29.187 #undef SPDK_CONFIG_CRYPTO 00:13:29.187 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:29.187 #undef SPDK_CONFIG_CUSTOMOCF 00:13:29.187 #undef SPDK_CONFIG_DAOS 00:13:29.187 #define SPDK_CONFIG_DAOS_DIR 00:13:29.187 #define SPDK_CONFIG_DEBUG 1 00:13:29.187 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:29.187 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:13:29.187 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:29.187 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:29.187 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:29.187 #define 
SPDK_CONFIG_ENV /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:13:29.187 #define SPDK_CONFIG_EXAMPLES 1 00:13:29.187 #undef SPDK_CONFIG_FC 00:13:29.187 #define SPDK_CONFIG_FC_PATH 00:13:29.187 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:29.187 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:29.187 #undef SPDK_CONFIG_FUSE 00:13:29.187 #undef SPDK_CONFIG_FUZZER 00:13:29.187 #define SPDK_CONFIG_FUZZER_LIB 00:13:29.187 #undef SPDK_CONFIG_GOLANG 00:13:29.187 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:29.187 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:29.187 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:29.187 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:29.187 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:29.187 #define SPDK_CONFIG_IDXD 1 00:13:29.187 #undef SPDK_CONFIG_IDXD_KERNEL 00:13:29.187 #undef SPDK_CONFIG_IPSEC_MB 00:13:29.187 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:29.187 #define SPDK_CONFIG_ISAL 1 00:13:29.187 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:29.187 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:29.187 #define SPDK_CONFIG_LIBDIR 00:13:29.187 #undef SPDK_CONFIG_LTO 00:13:29.187 #define SPDK_CONFIG_MAX_LCORES 00:13:29.187 #define SPDK_CONFIG_NVME_CUSE 1 00:13:29.187 #undef SPDK_CONFIG_OCF 00:13:29.187 #define SPDK_CONFIG_OCF_PATH 00:13:29.187 #define SPDK_CONFIG_OPENSSL_PATH 00:13:29.187 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:29.187 #undef SPDK_CONFIG_PGO_USE 00:13:29.187 #define SPDK_CONFIG_PREFIX /usr/local 00:13:29.187 #undef SPDK_CONFIG_RAID5F 00:13:29.187 #undef SPDK_CONFIG_RBD 00:13:29.187 #define SPDK_CONFIG_RDMA 1 00:13:29.187 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:29.187 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:29.187 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:29.187 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:29.187 #define SPDK_CONFIG_SHARED 1 00:13:29.187 #undef SPDK_CONFIG_SMA 00:13:29.187 #define SPDK_CONFIG_TESTS 1 00:13:29.187 #undef SPDK_CONFIG_TSAN 00:13:29.187 #define SPDK_CONFIG_UBLK 1 00:13:29.187 #define SPDK_CONFIG_UBSAN 1 00:13:29.187 #undef SPDK_CONFIG_UNIT_TESTS 00:13:29.187 #undef SPDK_CONFIG_URING 00:13:29.187 #define SPDK_CONFIG_URING_PATH 00:13:29.187 #undef SPDK_CONFIG_URING_ZNS 00:13:29.187 #undef SPDK_CONFIG_USDT 00:13:29.187 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:29.187 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:29.187 #undef SPDK_CONFIG_VFIO_USER 00:13:29.187 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:29.187 #define SPDK_CONFIG_VHOST 1 00:13:29.187 #define SPDK_CONFIG_VIRTIO 1 00:13:29.187 #undef SPDK_CONFIG_VTUNE 00:13:29.187 #define SPDK_CONFIG_VTUNE_DIR 00:13:29.187 #define SPDK_CONFIG_WERROR 1 00:13:29.187 #define SPDK_CONFIG_WPDK_DIR 00:13:29.188 #undef SPDK_CONFIG_XNVME 00:13:29.188 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:29.188 04:10:43 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:29.188 04:10:43 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:29.188 04:10:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.188 04:10:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.188 04:10:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.188 04:10:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.188 04:10:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.188 04:10:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.188 04:10:43 -- paths/export.sh@5 -- # export PATH 00:13:29.188 04:10:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.188 04:10:43 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:13:29.188 04:10:43 -- pm/common@6 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:13:29.188 04:10:43 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:13:29.188 04:10:43 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:13:29.188 04:10:43 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:29.188 04:10:43 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:13:29.188 04:10:43 -- pm/common@16 -- # TEST_TAG=N/A 00:13:29.188 04:10:43 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/.run_test_name 00:13:29.188 04:10:43 -- common/autotest_common.sh@52 -- # : 1 00:13:29.188 04:10:43 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:13:29.188 04:10:43 -- common/autotest_common.sh@56 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:29.188 04:10:43 -- common/autotest_common.sh@58 -- # : 0 00:13:29.188 
04:10:43 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:13:29.188 04:10:43 -- common/autotest_common.sh@60 -- # : 1 00:13:29.188 04:10:43 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:29.188 04:10:43 -- common/autotest_common.sh@62 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:13:29.188 04:10:43 -- common/autotest_common.sh@64 -- # : 00:13:29.188 04:10:43 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:13:29.188 04:10:43 -- common/autotest_common.sh@66 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:13:29.188 04:10:43 -- common/autotest_common.sh@68 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:13:29.188 04:10:43 -- common/autotest_common.sh@70 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:13:29.188 04:10:43 -- common/autotest_common.sh@72 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:29.188 04:10:43 -- common/autotest_common.sh@74 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:13:29.188 04:10:43 -- common/autotest_common.sh@76 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:13:29.188 04:10:43 -- common/autotest_common.sh@78 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:13:29.188 04:10:43 -- common/autotest_common.sh@80 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:13:29.188 04:10:43 -- common/autotest_common.sh@82 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:13:29.188 04:10:43 -- common/autotest_common.sh@84 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:13:29.188 04:10:43 -- common/autotest_common.sh@86 -- # : 1 00:13:29.188 04:10:43 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:13:29.188 04:10:43 -- common/autotest_common.sh@88 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:13:29.188 04:10:43 -- common/autotest_common.sh@90 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:29.188 04:10:43 -- common/autotest_common.sh@92 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:13:29.188 04:10:43 -- common/autotest_common.sh@94 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:13:29.188 04:10:43 -- common/autotest_common.sh@96 -- # : tcp 00:13:29.188 04:10:43 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:29.188 04:10:43 -- common/autotest_common.sh@98 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:13:29.188 04:10:43 -- common/autotest_common.sh@100 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:13:29.188 04:10:43 -- common/autotest_common.sh@102 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:13:29.188 04:10:43 -- common/autotest_common.sh@104 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:13:29.188 04:10:43 -- common/autotest_common.sh@106 -- # : 0 
00:13:29.188 04:10:43 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:13:29.188 04:10:43 -- common/autotest_common.sh@108 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:13:29.188 04:10:43 -- common/autotest_common.sh@110 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:13:29.188 04:10:43 -- common/autotest_common.sh@112 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:29.188 04:10:43 -- common/autotest_common.sh@114 -- # : 1 00:13:29.188 04:10:43 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:13:29.188 04:10:43 -- common/autotest_common.sh@116 -- # : 1 00:13:29.188 04:10:43 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:13:29.188 04:10:43 -- common/autotest_common.sh@118 -- # : 00:13:29.188 04:10:43 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:29.188 04:10:43 -- common/autotest_common.sh@120 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:13:29.188 04:10:43 -- common/autotest_common.sh@122 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:13:29.188 04:10:43 -- common/autotest_common.sh@124 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:13:29.188 04:10:43 -- common/autotest_common.sh@126 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:13:29.188 04:10:43 -- common/autotest_common.sh@128 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:13:29.188 04:10:43 -- common/autotest_common.sh@130 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:13:29.188 04:10:43 -- common/autotest_common.sh@132 -- # : 00:13:29.188 04:10:43 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:13:29.188 04:10:43 -- common/autotest_common.sh@134 -- # : true 00:13:29.188 04:10:43 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:13:29.188 04:10:43 -- common/autotest_common.sh@136 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:13:29.188 04:10:43 -- common/autotest_common.sh@138 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:13:29.188 04:10:43 -- common/autotest_common.sh@140 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:13:29.188 04:10:43 -- common/autotest_common.sh@142 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:13:29.188 04:10:43 -- common/autotest_common.sh@144 -- # : 0 00:13:29.188 04:10:43 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:13:29.188 04:10:43 -- common/autotest_common.sh@146 -- # : 0 00:13:29.189 04:10:43 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:13:29.189 04:10:43 -- common/autotest_common.sh@148 -- # : 00:13:29.189 04:10:43 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:13:29.189 04:10:43 -- common/autotest_common.sh@150 -- # : 0 00:13:29.189 04:10:43 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:13:29.189 04:10:43 -- common/autotest_common.sh@152 -- # : 0 00:13:29.189 04:10:43 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:13:29.189 04:10:43 -- common/autotest_common.sh@154 -- # 
: 0 00:13:29.189 04:10:43 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:13:29.189 04:10:43 -- common/autotest_common.sh@156 -- # : 1 00:13:29.189 04:10:43 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:13:29.189 04:10:43 -- common/autotest_common.sh@158 -- # : 1 00:13:29.189 04:10:43 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:13:29.189 04:10:43 -- common/autotest_common.sh@160 -- # : 0 00:13:29.189 04:10:43 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:13:29.189 04:10:43 -- common/autotest_common.sh@163 -- # : 00:13:29.189 04:10:43 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:13:29.189 04:10:43 -- common/autotest_common.sh@165 -- # : 0 00:13:29.189 04:10:43 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:13:29.189 04:10:43 -- common/autotest_common.sh@167 -- # : 0 00:13:29.189 04:10:43 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:29.189 04:10:43 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:13:29.189 04:10:43 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:13:29.189 04:10:43 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:13:29.189 04:10:43 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:13:29.189 04:10:43 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:29.189 04:10:43 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:29.189 04:10:43 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:29.189 04:10:43 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
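The long run of paired ": 0" / "export SPDK_TEST_*" entries above comes from autotest_common.sh giving every test knob a default before exporting it. A minimal bash sketch of that idiom (flag names taken from this run; the defaults shown are assumptions, not the verbatim autotest_common.sh source):

    # Give each knob a default only when autorun-spdk.conf did not set it, then
    # export it so every child script sees the same value. With `set -x` the
    # traced text is the expansion result, hence the bare ": 0" / ": 1" lines.
    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"; export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF:=0}";           export SPDK_TEST_NVMF
    : "${SPDK_TEST_ACCEL_DSA:=0}";      export SPDK_TEST_ACCEL_DSA
    : "${SPDK_TEST_ACCEL_IAA:=0}";      export SPDK_TEST_ACCEL_IAA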
00:13:29.189 04:10:43 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:29.189 04:10:43 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:29.189 04:10:43 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:13:29.189 04:10:43 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:13:29.189 04:10:43 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:29.189 04:10:43 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:13:29.189 04:10:43 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:29.189 04:10:43 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:29.189 04:10:43 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:29.189 04:10:43 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:29.189 04:10:43 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:29.189 04:10:43 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:13:29.189 04:10:43 -- common/autotest_common.sh@196 -- # cat 00:13:29.189 04:10:43 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:13:29.189 04:10:43 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:29.189 04:10:43 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:29.189 04:10:43 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:29.189 04:10:43 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:29.189 04:10:43 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:13:29.189 04:10:43 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:13:29.189 04:10:43 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:13:29.189 04:10:43 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:13:29.189 04:10:43 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:13:29.189 04:10:43 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:13:29.189 04:10:43 -- common/autotest_common.sh@239 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:29.189 04:10:43 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:29.189 04:10:43 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:29.189 04:10:43 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:29.189 04:10:43 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:29.189 04:10:43 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:29.189 04:10:43 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:29.189 04:10:43 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:29.189 04:10:43 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:13:29.189 04:10:43 -- common/autotest_common.sh@249 -- # export valgrind= 00:13:29.189 04:10:43 -- common/autotest_common.sh@249 -- # valgrind= 00:13:29.189 04:10:43 -- common/autotest_common.sh@255 -- # uname -s 00:13:29.189 04:10:43 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:13:29.189 04:10:43 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:13:29.189 04:10:43 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:13:29.189 04:10:43 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:13:29.189 04:10:43 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:13:29.189 04:10:43 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:13:29.189 04:10:43 -- common/autotest_common.sh@265 -- # MAKE=make 00:13:29.189 04:10:43 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j128 00:13:29.189 04:10:43 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:13:29.189 04:10:43 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:13:29.189 04:10:43 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output ']' 00:13:29.189 04:10:43 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:13:29.189 04:10:43 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:13:29.189 04:10:43 -- common/autotest_common.sh@291 -- # for i in "$@" 00:13:29.189 04:10:43 -- common/autotest_common.sh@292 -- # case "$i" in 00:13:29.189 04:10:43 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:13:29.189 04:10:43 -- common/autotest_common.sh@309 -- # [[ -z 3905073 ]] 00:13:29.189 04:10:43 -- common/autotest_common.sh@309 -- # kill -0 3905073 00:13:29.189 04:10:43 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:13:29.189 04:10:43 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:13:29.189 04:10:43 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:13:29.189 04:10:43 -- common/autotest_common.sh@322 -- # local mount target_dir 00:13:29.189 04:10:43 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:13:29.189 04:10:43 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:13:29.189 04:10:43 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:13:29.189 04:10:43 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:13:29.189 04:10:43 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.ykSni4 00:13:29.189 04:10:43 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" 
"$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:29.189 04:10:43 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:13:29.189 04:10:43 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:13:29.189 04:10:43 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ykSni4/tests/target /tmp/spdk.ykSni4 00:13:29.189 04:10:43 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:13:29.189 04:10:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.189 04:10:43 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:13:29.189 04:10:43 -- common/autotest_common.sh@318 -- # df -T 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:13:29.190 04:10:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:13:29.190 04:10:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=979206144 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:13:29.190 04:10:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=4305223680 00:13:29.190 04:10:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=258691805184 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=264763879424 00:13:29.190 04:10:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=6072074240 00:13:29.190 04:10:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=132379344896 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=132381937664 00:13:29.190 04:10:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:13:29.190 04:10:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=52943097856 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=52952776704 00:13:29.190 04:10:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=9678848 00:13:29.190 04:10:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:13:29.190 04:10:43 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=197632 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:13:29.190 04:10:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=306176 00:13:29.190 04:10:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=132380495872 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=132381941760 00:13:29.190 04:10:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=1445888 00:13:29.190 04:10:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=26476380160 00:13:29.190 04:10:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=26476384256 00:13:29.190 04:10:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:13:29.190 04:10:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:13:29.190 04:10:43 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:13:29.190 * Looking for test storage... 00:13:29.190 04:10:43 -- common/autotest_common.sh@359 -- # local target_space new_size 00:13:29.190 04:10:43 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:13:29.190 04:10:43 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.190 04:10:43 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:29.190 04:10:43 -- common/autotest_common.sh@363 -- # mount=/ 00:13:29.190 04:10:43 -- common/autotest_common.sh@365 -- # target_space=258691805184 00:13:29.190 04:10:43 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:13:29.190 04:10:43 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:13:29.190 04:10:43 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:13:29.190 04:10:43 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:13:29.190 04:10:43 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:13:29.190 04:10:43 -- common/autotest_common.sh@372 -- # new_size=8286666752 00:13:29.190 04:10:43 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:29.190 04:10:43 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.190 04:10:43 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.190 04:10:43 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.190 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:29.190 04:10:43 -- common/autotest_common.sh@380 -- # return 0 00:13:29.190 04:10:43 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:13:29.190 04:10:43 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:13:29.190 04:10:43 -- common/autotest_common.sh@1669 -- # trap 
'trap - ERR; print_backtrace >&2' ERR 00:13:29.190 04:10:43 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:29.190 04:10:43 -- common/autotest_common.sh@1672 -- # true 00:13:29.190 04:10:43 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:13:29.190 04:10:43 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:13:29.190 04:10:43 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:13:29.190 04:10:43 -- common/autotest_common.sh@27 -- # exec 00:13:29.190 04:10:43 -- common/autotest_common.sh@29 -- # exec 00:13:29.190 04:10:43 -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:29.190 04:10:43 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:29.190 04:10:43 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:29.190 04:10:43 -- common/autotest_common.sh@18 -- # set -x 00:13:29.190 04:10:43 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.190 04:10:43 -- nvmf/common.sh@7 -- # uname -s 00:13:29.190 04:10:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.190 04:10:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.190 04:10:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.190 04:10:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.190 04:10:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.190 04:10:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.190 04:10:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.190 04:10:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.190 04:10:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.190 04:10:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.190 04:10:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:29.190 04:10:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:13:29.190 04:10:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.190 04:10:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.190 04:10:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:29.190 04:10:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:29.190 04:10:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.190 04:10:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.190 04:10:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.190 04:10:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.190 04:10:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.190 04:10:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.190 04:10:43 -- paths/export.sh@5 -- # export PATH 00:13:29.190 04:10:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.190 04:10:43 -- nvmf/common.sh@46 -- # : 0 00:13:29.190 04:10:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:29.190 04:10:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:29.191 04:10:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:29.191 04:10:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.191 04:10:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.191 04:10:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:29.191 04:10:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:29.191 04:10:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:29.191 04:10:43 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:29.191 04:10:43 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:29.191 04:10:43 -- target/filesystem.sh@15 -- # nvmftestinit 00:13:29.191 04:10:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:29.191 04:10:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.191 04:10:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:29.191 04:10:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:29.191 04:10:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:29.191 04:10:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.191 04:10:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.191 04:10:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.191 04:10:43 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:29.191 04:10:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
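The "* Looking for test storage..." block a few entries back is where set_test_storage decides where the nvmf target test writes its scratch data: parse df, resolve each candidate directory to its backing mount, and take the first one whose free space covers the ~2 GiB request. A simplified sketch of that selection (variable and array names follow the trace; the 95% guard mirrors the computation shown above, and the paths are the ones from this run):

    testdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target   # from the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)                             # /tmp/spdk.ykSni4 in this run
    requested_size=2214592512                                               # 2 GiB + 64 MiB, as traced
    declare -A avails sizes uses
    while read -r source fs size use avail _ mount; do
        sizes[$mount]=$((size * 1024)); uses[$mount]=$((use * 1024)); avails[$mount]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mkdir -p "$target_dir"
        mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        (( ${avails[$mount_point]:-0} >= requested_size )) || continue
        new_size=$((uses[$mount_point] + requested_size))
        (( new_size * 100 / sizes[$mount_point] > 95 )) && continue         # would leave the fs >95% full
        export SPDK_TEST_STORAGE=$target_dir && break
    done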
00:13:29.191 04:10:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:29.191 04:10:43 -- common/autotest_common.sh@10 -- # set +x 00:13:34.466 04:10:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:34.466 04:10:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:34.467 04:10:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:34.467 04:10:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:34.467 04:10:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:34.467 04:10:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:34.467 04:10:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:34.467 04:10:48 -- nvmf/common.sh@294 -- # net_devs=() 00:13:34.467 04:10:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:34.467 04:10:48 -- nvmf/common.sh@295 -- # e810=() 00:13:34.467 04:10:48 -- nvmf/common.sh@295 -- # local -ga e810 00:13:34.467 04:10:48 -- nvmf/common.sh@296 -- # x722=() 00:13:34.467 04:10:48 -- nvmf/common.sh@296 -- # local -ga x722 00:13:34.467 04:10:48 -- nvmf/common.sh@297 -- # mlx=() 00:13:34.467 04:10:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:34.467 04:10:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.467 04:10:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:34.467 04:10:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:34.467 04:10:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.467 04:10:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:34.467 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:34.467 04:10:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.467 04:10:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:34.467 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:34.467 04:10:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
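gather_supported_nvmf_pci_devs above matches each NIC's device ID against per-family whitelists (E810, X722, Mellanox) and, in the entries that continue below, resolves every accepted PCI function to its kernel net device through sysfs. A condensed sketch of that resolution step (the PCI addresses are the two E810 ports identified in this run; this is not the verbatim nvmf/common.sh loop):

    shopt -s nullglob                                   # so an unbound port yields an empty array
    net_devs=()
    for pci in 0000:27:00.0 0000:27:00.1; do            # the two 0x8086:0x159b functions found above
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        (( ${#pci_net_devs[@]} == 0 )) && continue      # no kernel net device behind this function
        pci_net_devs=("${pci_net_devs[@]##*/}")         # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done                                                # -> cvl_0_0 and cvl_0_1 in this run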
00:13:34.467 04:10:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:34.467 04:10:48 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:13:34.467 04:10:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.467 04:10:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.467 04:10:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.467 04:10:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.467 04:10:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:34.467 Found net devices under 0000:27:00.0: cvl_0_0 00:13:34.467 04:10:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.467 04:10:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.467 04:10:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.467 04:10:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.467 04:10:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.467 04:10:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:34.467 Found net devices under 0000:27:00.1: cvl_0_1 00:13:34.467 04:10:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.467 04:10:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:34.467 04:10:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:34.467 04:10:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:34.467 04:10:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:34.467 04:10:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:34.467 04:10:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.467 04:10:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.467 04:10:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.467 04:10:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:34.467 04:10:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.467 04:10:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.467 04:10:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:34.467 04:10:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.467 04:10:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.467 04:10:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:34.467 04:10:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:34.467 04:10:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.467 04:10:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.726 04:10:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.726 04:10:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.726 04:10:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:34.727 04:10:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.727 04:10:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.727 04:10:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.727 04:10:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:34.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:34.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:13:34.727 00:13:34.727 --- 10.0.0.2 ping statistics --- 00:13:34.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.727 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:13:34.727 04:10:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:13:34.727 00:13:34.727 --- 10.0.0.1 ping statistics --- 00:13:34.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.727 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:13:34.727 04:10:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.727 04:10:49 -- nvmf/common.sh@410 -- # return 0 00:13:34.727 04:10:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:34.727 04:10:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.727 04:10:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:34.727 04:10:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:34.727 04:10:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.727 04:10:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:34.727 04:10:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:34.727 04:10:49 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:34.727 04:10:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:34.727 04:10:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:34.727 04:10:49 -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 ************************************ 00:13:34.727 START TEST nvmf_filesystem_no_in_capsule 00:13:34.727 ************************************ 00:13:34.727 04:10:49 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:13:34.727 04:10:49 -- target/filesystem.sh@47 -- # in_capsule=0 00:13:34.727 04:10:49 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:34.727 04:10:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:34.727 04:10:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:34.727 04:10:49 -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 04:10:49 -- nvmf/common.sh@469 -- # nvmfpid=3908624 00:13:34.727 04:10:49 -- nvmf/common.sh@470 -- # waitforlisten 3908624 00:13:34.727 04:10:49 -- common/autotest_common.sh@819 -- # '[' -z 3908624 ']' 00:13:34.727 04:10:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.727 04:10:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.727 04:10:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:34.727 04:10:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.727 04:10:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:34.727 04:10:49 -- common/autotest_common.sh@10 -- # set +x 00:13:34.985 [2024-05-14 04:10:49.354803] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:34.985 [2024-05-14 04:10:49.354969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.985 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.985 [2024-05-14 04:10:49.475231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.985 [2024-05-14 04:10:49.567946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:34.985 [2024-05-14 04:10:49.568108] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.985 [2024-05-14 04:10:49.568122] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.985 [2024-05-14 04:10:49.568131] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.985 [2024-05-14 04:10:49.568243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.985 [2024-05-14 04:10:49.568305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.985 [2024-05-14 04:10:49.568415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.985 [2024-05-14 04:10:49.568425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.554 04:10:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:35.554 04:10:50 -- common/autotest_common.sh@852 -- # return 0 00:13:35.554 04:10:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:35.554 04:10:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:35.554 04:10:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.554 04:10:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.554 04:10:50 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:35.554 04:10:50 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:35.554 04:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.554 04:10:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.554 [2024-05-14 04:10:50.099296] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.554 04:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.554 04:10:50 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:35.554 04:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.554 04:10:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.815 Malloc1 00:13:35.815 04:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.815 04:10:50 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:35.815 04:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.815 04:10:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.815 04:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.815 04:10:50 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:35.815 04:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.815 04:10:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.815 04:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.815 04:10:50 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
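filesystem.sh stands the target up with a handful of rpc_cmd calls: create the TCP transport, back it with a 512 MiB malloc bdev, create subsystem cnode1, attach the bdev as a namespace, and open a listener on 10.0.0.2:4420 (the trace for the listener call completes just below). Since rpc_cmd is effectively a wrapper around scripts/rpc.py, the same setup can be reproduced by hand against the default /var/tmp/spdk.sock socket; a sketch of the equivalent calls, copied from the arguments traced above rather than captured output:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420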
00:13:35.815 04:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.815 04:10:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.815 [2024-05-14 04:10:50.371625] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.815 04:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.815 04:10:50 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:35.815 04:10:50 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:13:35.815 04:10:50 -- common/autotest_common.sh@1358 -- # local bdev_info 00:13:35.815 04:10:50 -- common/autotest_common.sh@1359 -- # local bs 00:13:35.815 04:10:50 -- common/autotest_common.sh@1360 -- # local nb 00:13:35.815 04:10:50 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:35.815 04:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:35.815 04:10:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.815 04:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:35.815 04:10:50 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:13:35.815 { 00:13:35.815 "name": "Malloc1", 00:13:35.815 "aliases": [ 00:13:35.815 "59077a99-d923-49ab-8c73-80367c70d90d" 00:13:35.815 ], 00:13:35.815 "product_name": "Malloc disk", 00:13:35.815 "block_size": 512, 00:13:35.815 "num_blocks": 1048576, 00:13:35.815 "uuid": "59077a99-d923-49ab-8c73-80367c70d90d", 00:13:35.815 "assigned_rate_limits": { 00:13:35.815 "rw_ios_per_sec": 0, 00:13:35.815 "rw_mbytes_per_sec": 0, 00:13:35.815 "r_mbytes_per_sec": 0, 00:13:35.815 "w_mbytes_per_sec": 0 00:13:35.815 }, 00:13:35.815 "claimed": true, 00:13:35.815 "claim_type": "exclusive_write", 00:13:35.815 "zoned": false, 00:13:35.815 "supported_io_types": { 00:13:35.815 "read": true, 00:13:35.815 "write": true, 00:13:35.815 "unmap": true, 00:13:35.815 "write_zeroes": true, 00:13:35.815 "flush": true, 00:13:35.815 "reset": true, 00:13:35.815 "compare": false, 00:13:35.815 "compare_and_write": false, 00:13:35.815 "abort": true, 00:13:35.815 "nvme_admin": false, 00:13:35.815 "nvme_io": false 00:13:35.815 }, 00:13:35.815 "memory_domains": [ 00:13:35.815 { 00:13:35.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.815 "dma_device_type": 2 00:13:35.815 } 00:13:35.815 ], 00:13:35.815 "driver_specific": {} 00:13:35.815 } 00:13:35.815 ]' 00:13:35.815 04:10:50 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:13:36.076 04:10:50 -- common/autotest_common.sh@1362 -- # bs=512 00:13:36.076 04:10:50 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:13:36.076 04:10:50 -- common/autotest_common.sh@1363 -- # nb=1048576 00:13:36.076 04:10:50 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:13:36.076 04:10:50 -- common/autotest_common.sh@1367 -- # echo 512 00:13:36.076 04:10:50 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:36.076 04:10:50 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.462 04:10:51 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.462 04:10:51 -- common/autotest_common.sh@1177 -- # local i=0 00:13:37.462 04:10:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.462 04:10:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:37.462 04:10:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:40.008 04:10:53 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:40.008 04:10:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:40.008 04:10:53 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.008 04:10:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:40.008 04:10:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.008 04:10:53 -- common/autotest_common.sh@1187 -- # return 0 00:13:40.008 04:10:54 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:40.008 04:10:54 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:40.009 04:10:54 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:40.009 04:10:54 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:40.009 04:10:54 -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:40.009 04:10:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:40.009 04:10:54 -- setup/common.sh@80 -- # echo 536870912 00:13:40.009 04:10:54 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:40.009 04:10:54 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:40.009 04:10:54 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:40.009 04:10:54 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:40.009 04:10:54 -- target/filesystem.sh@69 -- # partprobe 00:13:40.268 04:10:54 -- target/filesystem.sh@70 -- # sleep 1 00:13:41.210 04:10:55 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:41.210 04:10:55 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:41.210 04:10:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:41.210 04:10:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:41.210 04:10:55 -- common/autotest_common.sh@10 -- # set +x 00:13:41.210 ************************************ 00:13:41.210 START TEST filesystem_ext4 00:13:41.210 ************************************ 00:13:41.210 04:10:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:41.210 04:10:55 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:41.210 04:10:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:41.210 04:10:55 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:41.210 04:10:55 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:13:41.210 04:10:55 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:41.210 04:10:55 -- common/autotest_common.sh@904 -- # local i=0 00:13:41.210 04:10:55 -- common/autotest_common.sh@905 -- # local force 00:13:41.210 04:10:55 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:13:41.210 04:10:55 -- common/autotest_common.sh@908 -- # force=-F 00:13:41.210 04:10:55 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:41.210 mke2fs 1.46.5 (30-Dec-2021) 00:13:41.471 Discarding device blocks: 0/522240 done 00:13:41.471 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:41.471 Filesystem UUID: 334e24fe-794d-4a59-b350-61aa3234ce3d 00:13:41.471 Superblock backups stored on blocks: 00:13:41.471 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:41.471 00:13:41.471 Allocating group tables: 0/64 done 00:13:41.471 Writing inode tables: 0/64 done 00:13:41.471 Creating journal (8192 blocks): done 00:13:42.298 Writing superblocks and filesystem accounting information: 0/64 done 00:13:42.298 00:13:42.298 04:10:56 -- 
common/autotest_common.sh@921 -- # return 0 00:13:42.298 04:10:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:42.866 04:10:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:42.866 04:10:57 -- target/filesystem.sh@25 -- # sync 00:13:42.866 04:10:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:42.866 04:10:57 -- target/filesystem.sh@27 -- # sync 00:13:42.866 04:10:57 -- target/filesystem.sh@29 -- # i=0 00:13:42.866 04:10:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:43.127 04:10:57 -- target/filesystem.sh@37 -- # kill -0 3908624 00:13:43.127 04:10:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:43.127 04:10:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:43.127 04:10:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:43.127 04:10:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:43.127 00:13:43.127 real 0m1.720s 00:13:43.127 user 0m0.023s 00:13:43.127 sys 0m0.037s 00:13:43.127 04:10:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.127 04:10:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.127 ************************************ 00:13:43.127 END TEST filesystem_ext4 00:13:43.127 ************************************ 00:13:43.127 04:10:57 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:43.127 04:10:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:43.127 04:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:43.127 04:10:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.127 ************************************ 00:13:43.127 START TEST filesystem_btrfs 00:13:43.127 ************************************ 00:13:43.127 04:10:57 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:43.127 04:10:57 -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:43.127 04:10:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:43.127 04:10:57 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:43.127 04:10:57 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:13:43.127 04:10:57 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:43.127 04:10:57 -- common/autotest_common.sh@904 -- # local i=0 00:13:43.127 04:10:57 -- common/autotest_common.sh@905 -- # local force 00:13:43.127 04:10:57 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:13:43.127 04:10:57 -- common/autotest_common.sh@910 -- # force=-f 00:13:43.127 04:10:57 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:43.419 btrfs-progs v6.6.2 00:13:43.419 See https://btrfs.readthedocs.io for more information. 00:13:43.419 00:13:43.419 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:43.419 NOTE: several default settings have changed in version 5.15, please make sure 00:13:43.419 this does not affect your deployments: 00:13:43.419 - DUP for metadata (-m dup) 00:13:43.419 - enabled no-holes (-O no-holes) 00:13:43.419 - enabled free-space-tree (-R free-space-tree) 00:13:43.419 00:13:43.419 Label: (null) 00:13:43.419 UUID: 145551ce-edc9-4238-b512-ffb6c9571a22 00:13:43.419 Node size: 16384 00:13:43.419 Sector size: 4096 00:13:43.419 Filesystem size: 510.00MiB 00:13:43.419 Block group profiles: 00:13:43.419 Data: single 8.00MiB 00:13:43.419 Metadata: DUP 32.00MiB 00:13:43.419 System: DUP 8.00MiB 00:13:43.419 SSD detected: yes 00:13:43.419 Zoned device: no 00:13:43.419 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:43.419 Runtime features: free-space-tree 00:13:43.419 Checksum: crc32c 00:13:43.419 Number of devices: 1 00:13:43.419 Devices: 00:13:43.419 ID SIZE PATH 00:13:43.419 1 510.00MiB /dev/nvme0n1p1 00:13:43.419 00:13:43.419 04:10:57 -- common/autotest_common.sh@921 -- # return 0 00:13:43.419 04:10:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:43.989 04:10:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:43.989 04:10:58 -- target/filesystem.sh@25 -- # sync 00:13:43.989 04:10:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:43.989 04:10:58 -- target/filesystem.sh@27 -- # sync 00:13:43.989 04:10:58 -- target/filesystem.sh@29 -- # i=0 00:13:43.989 04:10:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:43.989 04:10:58 -- target/filesystem.sh@37 -- # kill -0 3908624 00:13:43.989 04:10:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:43.989 04:10:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:43.989 04:10:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:43.989 04:10:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:43.989 00:13:43.989 real 0m0.802s 00:13:43.989 user 0m0.019s 00:13:43.989 sys 0m0.055s 00:13:43.989 04:10:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.989 04:10:58 -- common/autotest_common.sh@10 -- # set +x 00:13:43.989 ************************************ 00:13:43.989 END TEST filesystem_btrfs 00:13:43.989 ************************************ 00:13:43.989 04:10:58 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:43.989 04:10:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:43.989 04:10:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:43.989 04:10:58 -- common/autotest_common.sh@10 -- # set +x 00:13:43.989 ************************************ 00:13:43.989 START TEST filesystem_xfs 00:13:43.989 ************************************ 00:13:43.989 04:10:58 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:13:43.989 04:10:58 -- target/filesystem.sh@18 -- # fstype=xfs 00:13:43.989 04:10:58 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:43.989 04:10:58 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:43.989 04:10:58 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:13:43.989 04:10:58 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:43.989 04:10:58 -- common/autotest_common.sh@904 -- # local i=0 00:13:43.989 04:10:58 -- common/autotest_common.sh@905 -- # local force 00:13:43.989 04:10:58 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:13:43.989 04:10:58 -- common/autotest_common.sh@910 -- # force=-f 00:13:43.989 04:10:58 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:43.989 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:43.989 = sectsz=512 attr=2, projid32bit=1 00:13:43.989 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:43.989 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:43.989 data = bsize=4096 blocks=130560, imaxpct=25 00:13:43.989 = sunit=0 swidth=0 blks 00:13:43.989 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:43.989 log =internal log bsize=4096 blocks=16384, version=2 00:13:43.989 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:43.989 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:44.933 Discarding blocks...Done. 00:13:44.933 04:10:59 -- common/autotest_common.sh@921 -- # return 0 00:13:44.933 04:10:59 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:46.843 04:11:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:46.843 04:11:01 -- target/filesystem.sh@25 -- # sync 00:13:46.843 04:11:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:46.843 04:11:01 -- target/filesystem.sh@27 -- # sync 00:13:46.843 04:11:01 -- target/filesystem.sh@29 -- # i=0 00:13:46.843 04:11:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:46.843 04:11:01 -- target/filesystem.sh@37 -- # kill -0 3908624 00:13:46.843 04:11:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:46.843 04:11:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:46.843 04:11:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:46.843 04:11:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:46.843 00:13:46.843 real 0m2.793s 00:13:46.843 user 0m0.019s 00:13:46.843 sys 0m0.053s 00:13:46.843 04:11:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.843 04:11:01 -- common/autotest_common.sh@10 -- # set +x 00:13:46.843 ************************************ 00:13:46.843 END TEST filesystem_xfs 00:13:46.843 ************************************ 00:13:46.843 04:11:01 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:47.103 04:11:01 -- target/filesystem.sh@93 -- # sync 00:13:47.103 04:11:01 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.103 04:11:01 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.103 04:11:01 -- common/autotest_common.sh@1198 -- # local i=0 00:13:47.103 04:11:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:47.103 04:11:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.103 04:11:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.103 04:11:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:47.103 04:11:01 -- common/autotest_common.sh@1210 -- # return 0 00:13:47.103 04:11:01 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.103 04:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.103 04:11:01 -- common/autotest_common.sh@10 -- # set +x 00:13:47.103 04:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.103 04:11:01 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:47.103 04:11:01 -- target/filesystem.sh@101 -- # killprocess 3908624 00:13:47.103 04:11:01 -- common/autotest_common.sh@926 -- # '[' -z 3908624 ']' 00:13:47.103 04:11:01 -- common/autotest_common.sh@930 -- # kill -0 3908624 00:13:47.103 04:11:01 -- 
common/autotest_common.sh@931 -- # uname 00:13:47.103 04:11:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:47.103 04:11:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3908624 00:13:47.103 04:11:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:47.103 04:11:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:47.103 04:11:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3908624' 00:13:47.103 killing process with pid 3908624 00:13:47.103 04:11:01 -- common/autotest_common.sh@945 -- # kill 3908624 00:13:47.103 04:11:01 -- common/autotest_common.sh@950 -- # wait 3908624 00:13:48.046 04:11:02 -- target/filesystem.sh@102 -- # nvmfpid= 00:13:48.046 00:13:48.046 real 0m13.342s 00:13:48.046 user 0m51.451s 00:13:48.046 sys 0m1.021s 00:13:48.046 04:11:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.046 04:11:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.046 ************************************ 00:13:48.046 END TEST nvmf_filesystem_no_in_capsule 00:13:48.046 ************************************ 00:13:48.306 04:11:02 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:48.306 04:11:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:48.306 04:11:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:48.306 04:11:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 ************************************ 00:13:48.306 START TEST nvmf_filesystem_in_capsule 00:13:48.306 ************************************ 00:13:48.306 04:11:02 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:13:48.306 04:11:02 -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:48.306 04:11:02 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:48.306 04:11:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:48.306 04:11:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:48.306 04:11:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 04:11:02 -- nvmf/common.sh@469 -- # nvmfpid=3911257 00:13:48.306 04:11:02 -- nvmf/common.sh@470 -- # waitforlisten 3911257 00:13:48.306 04:11:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.306 04:11:02 -- common/autotest_common.sh@819 -- # '[' -z 3911257 ']' 00:13:48.306 04:11:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.306 04:11:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:48.307 04:11:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.307 04:11:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:48.307 04:11:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.307 [2024-05-14 04:11:02.731032] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:48.307 [2024-05-14 04:11:02.731143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.307 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.307 [2024-05-14 04:11:02.834347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.568 [2024-05-14 04:11:02.930897] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:48.568 [2024-05-14 04:11:02.931102] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.568 [2024-05-14 04:11:02.931119] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.568 [2024-05-14 04:11:02.931131] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.568 [2024-05-14 04:11:02.931204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.568 [2024-05-14 04:11:02.931259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.568 [2024-05-14 04:11:02.931289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.568 [2024-05-14 04:11:02.931298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.138 04:11:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:49.138 04:11:03 -- common/autotest_common.sh@852 -- # return 0 00:13:49.138 04:11:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:49.138 04:11:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:49.138 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:49.138 04:11:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.138 04:11:03 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:49.138 04:11:03 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:49.138 04:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.138 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:49.138 [2024-05-14 04:11:03.483833] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.138 04:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.138 04:11:03 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:49.138 04:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.138 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:49.138 Malloc1 00:13:49.138 04:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.138 04:11:03 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.138 04:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.138 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:49.138 04:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.399 04:11:03 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:49.399 04:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.399 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:49.399 04:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.399 04:11:03 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
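For orientation, the rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py. A minimal sketch of the same target setup driven by hand against a running nvmf_tgt (socket path and working directory are assumed defaults, not taken from this run):

    # TCP transport with an 8 KiB IO unit and 4096-byte in-capsule data, as in this test
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # 512 MiB malloc bdev (1048576 blocks of 512 bytes) to serve as the namespace
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420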
00:13:49.399 04:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.399 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:49.399 [2024-05-14 04:11:03.737563] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.399 04:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.399 04:11:03 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:49.399 04:11:03 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:13:49.399 04:11:03 -- common/autotest_common.sh@1358 -- # local bdev_info 00:13:49.399 04:11:03 -- common/autotest_common.sh@1359 -- # local bs 00:13:49.399 04:11:03 -- common/autotest_common.sh@1360 -- # local nb 00:13:49.399 04:11:03 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:49.399 04:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.399 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:13:49.399 04:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.399 04:11:03 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:13:49.399 { 00:13:49.399 "name": "Malloc1", 00:13:49.399 "aliases": [ 00:13:49.399 "b7ed6c53-6eea-4287-94a7-2ddc54bbda69" 00:13:49.399 ], 00:13:49.399 "product_name": "Malloc disk", 00:13:49.399 "block_size": 512, 00:13:49.399 "num_blocks": 1048576, 00:13:49.399 "uuid": "b7ed6c53-6eea-4287-94a7-2ddc54bbda69", 00:13:49.399 "assigned_rate_limits": { 00:13:49.399 "rw_ios_per_sec": 0, 00:13:49.399 "rw_mbytes_per_sec": 0, 00:13:49.399 "r_mbytes_per_sec": 0, 00:13:49.399 "w_mbytes_per_sec": 0 00:13:49.399 }, 00:13:49.399 "claimed": true, 00:13:49.399 "claim_type": "exclusive_write", 00:13:49.399 "zoned": false, 00:13:49.399 "supported_io_types": { 00:13:49.399 "read": true, 00:13:49.399 "write": true, 00:13:49.399 "unmap": true, 00:13:49.399 "write_zeroes": true, 00:13:49.399 "flush": true, 00:13:49.399 "reset": true, 00:13:49.399 "compare": false, 00:13:49.399 "compare_and_write": false, 00:13:49.399 "abort": true, 00:13:49.399 "nvme_admin": false, 00:13:49.399 "nvme_io": false 00:13:49.399 }, 00:13:49.399 "memory_domains": [ 00:13:49.399 { 00:13:49.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.399 "dma_device_type": 2 00:13:49.399 } 00:13:49.399 ], 00:13:49.399 "driver_specific": {} 00:13:49.399 } 00:13:49.399 ]' 00:13:49.399 04:11:03 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:13:49.399 04:11:03 -- common/autotest_common.sh@1362 -- # bs=512 00:13:49.399 04:11:03 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:13:49.399 04:11:03 -- common/autotest_common.sh@1363 -- # nb=1048576 00:13:49.399 04:11:03 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:13:49.399 04:11:03 -- common/autotest_common.sh@1367 -- # echo 512 00:13:49.399 04:11:03 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:49.399 04:11:03 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:50.779 04:11:05 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:50.779 04:11:05 -- common/autotest_common.sh@1177 -- # local i=0 00:13:50.779 04:11:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:50.779 04:11:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:50.779 04:11:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:52.687 04:11:07 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:52.687 04:11:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:52.687 04:11:07 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:52.687 04:11:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:52.687 04:11:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.687 04:11:07 -- common/autotest_common.sh@1187 -- # return 0 00:13:52.687 04:11:07 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:52.687 04:11:07 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:52.687 04:11:07 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:52.687 04:11:07 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:52.687 04:11:07 -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:52.687 04:11:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:52.687 04:11:07 -- setup/common.sh@80 -- # echo 536870912 00:13:52.687 04:11:07 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:52.687 04:11:07 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:52.687 04:11:07 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:52.687 04:11:07 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:52.945 04:11:07 -- target/filesystem.sh@69 -- # partprobe 00:13:53.203 04:11:07 -- target/filesystem.sh@70 -- # sleep 1 00:13:54.580 04:11:08 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:54.580 04:11:08 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:54.580 04:11:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:54.580 04:11:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:54.580 04:11:08 -- common/autotest_common.sh@10 -- # set +x 00:13:54.580 ************************************ 00:13:54.580 START TEST filesystem_in_capsule_ext4 00:13:54.580 ************************************ 00:13:54.580 04:11:08 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:54.580 04:11:08 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:54.580 04:11:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:54.580 04:11:08 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:54.580 04:11:08 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:13:54.580 04:11:08 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:54.580 04:11:08 -- common/autotest_common.sh@904 -- # local i=0 00:13:54.580 04:11:08 -- common/autotest_common.sh@905 -- # local force 00:13:54.580 04:11:08 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:13:54.580 04:11:08 -- common/autotest_common.sh@908 -- # force=-F 00:13:54.580 04:11:08 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:54.580 mke2fs 1.46.5 (30-Dec-2021) 00:13:54.580 Discarding device blocks: 0/522240 done 00:13:54.580 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:54.580 Filesystem UUID: 765d69f2-e6f2-4403-ad56-a8f4dc4290e5 00:13:54.580 Superblock backups stored on blocks: 00:13:54.580 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:54.580 00:13:54.580 Allocating group tables: 0/64 done 00:13:54.580 Writing inode tables: 0/64 done 00:13:54.580 Creating journal (8192 blocks): done 00:13:55.515 Writing superblocks and filesystem accounting information: 0/64 done 00:13:55.515 00:13:55.515 
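On the initiator side, the steps just traced reduce to connecting with nvme-cli, waiting for the block device with the expected serial to appear, and carving a single GPT partition to format. A rough sketch, assuming the device shows up as /dev/nvme0n1 (the test discovers it via lsblk rather than hard-coding it):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # poll until the namespace with serial SPDKISFASTANDAWESOME is visible
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe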
04:11:09 -- common/autotest_common.sh@921 -- # return 0 00:13:55.515 04:11:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:55.774 04:11:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:55.774 04:11:10 -- target/filesystem.sh@25 -- # sync 00:13:55.774 04:11:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:55.774 04:11:10 -- target/filesystem.sh@27 -- # sync 00:13:55.774 04:11:10 -- target/filesystem.sh@29 -- # i=0 00:13:55.774 04:11:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:55.774 04:11:10 -- target/filesystem.sh@37 -- # kill -0 3911257 00:13:55.774 04:11:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:55.774 04:11:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:55.774 04:11:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:55.774 04:11:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:55.774 00:13:55.774 real 0m1.446s 00:13:55.774 user 0m0.022s 00:13:55.774 sys 0m0.047s 00:13:55.774 04:11:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.774 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:13:55.774 ************************************ 00:13:55.774 END TEST filesystem_in_capsule_ext4 00:13:55.774 ************************************ 00:13:55.774 04:11:10 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:55.774 04:11:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:55.774 04:11:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:55.774 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:13:55.774 ************************************ 00:13:55.774 START TEST filesystem_in_capsule_btrfs 00:13:55.774 ************************************ 00:13:55.774 04:11:10 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:55.774 04:11:10 -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:55.774 04:11:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:55.774 04:11:10 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:55.774 04:11:10 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:13:55.774 04:11:10 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:55.774 04:11:10 -- common/autotest_common.sh@904 -- # local i=0 00:13:55.774 04:11:10 -- common/autotest_common.sh@905 -- # local force 00:13:55.774 04:11:10 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:13:55.774 04:11:10 -- common/autotest_common.sh@910 -- # force=-f 00:13:55.774 04:11:10 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:56.034 btrfs-progs v6.6.2 00:13:56.034 See https://btrfs.readthedocs.io for more information. 00:13:56.034 00:13:56.034 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:56.034 NOTE: several default settings have changed in version 5.15, please make sure 00:13:56.034 this does not affect your deployments: 00:13:56.034 - DUP for metadata (-m dup) 00:13:56.034 - enabled no-holes (-O no-holes) 00:13:56.034 - enabled free-space-tree (-R free-space-tree) 00:13:56.034 00:13:56.034 Label: (null) 00:13:56.034 UUID: 4d399c2c-e857-4ebc-a9c9-1e183f9aa30e 00:13:56.034 Node size: 16384 00:13:56.034 Sector size: 4096 00:13:56.034 Filesystem size: 510.00MiB 00:13:56.034 Block group profiles: 00:13:56.034 Data: single 8.00MiB 00:13:56.034 Metadata: DUP 32.00MiB 00:13:56.034 System: DUP 8.00MiB 00:13:56.034 SSD detected: yes 00:13:56.034 Zoned device: no 00:13:56.034 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:56.034 Runtime features: free-space-tree 00:13:56.034 Checksum: crc32c 00:13:56.034 Number of devices: 1 00:13:56.034 Devices: 00:13:56.034 ID SIZE PATH 00:13:56.034 1 510.00MiB /dev/nvme0n1p1 00:13:56.034 00:13:56.034 04:11:10 -- common/autotest_common.sh@921 -- # return 0 00:13:56.034 04:11:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:56.294 04:11:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:56.294 04:11:10 -- target/filesystem.sh@25 -- # sync 00:13:56.294 04:11:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:56.294 04:11:10 -- target/filesystem.sh@27 -- # sync 00:13:56.294 04:11:10 -- target/filesystem.sh@29 -- # i=0 00:13:56.294 04:11:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:56.294 04:11:10 -- target/filesystem.sh@37 -- # kill -0 3911257 00:13:56.294 04:11:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:56.294 04:11:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:56.294 04:11:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:56.294 04:11:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:56.294 00:13:56.294 real 0m0.498s 00:13:56.294 user 0m0.014s 00:13:56.294 sys 0m0.065s 00:13:56.294 04:11:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.294 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:13:56.294 ************************************ 00:13:56.294 END TEST filesystem_in_capsule_btrfs 00:13:56.294 ************************************ 00:13:56.294 04:11:10 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:56.294 04:11:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:56.294 04:11:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:56.294 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:13:56.294 ************************************ 00:13:56.294 START TEST filesystem_in_capsule_xfs 00:13:56.294 ************************************ 00:13:56.294 04:11:10 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:13:56.294 04:11:10 -- target/filesystem.sh@18 -- # fstype=xfs 00:13:56.294 04:11:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:56.294 04:11:10 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:56.294 04:11:10 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:13:56.294 04:11:10 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:56.294 04:11:10 -- common/autotest_common.sh@904 -- # local i=0 00:13:56.294 04:11:10 -- common/autotest_common.sh@905 -- # local force 00:13:56.294 04:11:10 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:13:56.294 04:11:10 -- common/autotest_common.sh@910 -- # force=-f 
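The make_filesystem helper whose trace appears before every mkfs call only selects the force flag appropriate to the filesystem and then formats the partition (the real helper also retries on failure, which is omitted here). A simplified sketch of the traced logic:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4 forces with -F; btrfs and xfs force with -f
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs.$fstype $force "$dev_name"
    }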
00:13:56.294 04:11:10 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:56.294 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:56.294 = sectsz=512 attr=2, projid32bit=1 00:13:56.294 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:56.294 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:56.294 data = bsize=4096 blocks=130560, imaxpct=25 00:13:56.294 = sunit=0 swidth=0 blks 00:13:56.294 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:56.294 log =internal log bsize=4096 blocks=16384, version=2 00:13:56.294 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:56.294 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:57.237 Discarding blocks...Done. 00:13:57.237 04:11:11 -- common/autotest_common.sh@921 -- # return 0 00:13:57.237 04:11:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:59.150 04:11:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:59.150 04:11:13 -- target/filesystem.sh@25 -- # sync 00:13:59.150 04:11:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:59.150 04:11:13 -- target/filesystem.sh@27 -- # sync 00:13:59.150 04:11:13 -- target/filesystem.sh@29 -- # i=0 00:13:59.150 04:11:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:59.150 04:11:13 -- target/filesystem.sh@37 -- # kill -0 3911257 00:13:59.150 04:11:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:59.150 04:11:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:59.150 04:11:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:59.150 04:11:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:59.150 00:13:59.150 real 0m2.630s 00:13:59.150 user 0m0.018s 00:13:59.150 sys 0m0.053s 00:13:59.150 04:11:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.150 04:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:59.150 ************************************ 00:13:59.150 END TEST filesystem_in_capsule_xfs 00:13:59.150 ************************************ 00:13:59.150 04:11:13 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:59.150 04:11:13 -- target/filesystem.sh@93 -- # sync 00:13:59.150 04:11:13 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.150 04:11:13 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.150 04:11:13 -- common/autotest_common.sh@1198 -- # local i=0 00:13:59.150 04:11:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:59.150 04:11:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.150 04:11:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:59.150 04:11:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.150 04:11:13 -- common/autotest_common.sh@1210 -- # return 0 00:13:59.150 04:11:13 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.150 04:11:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.150 04:11:13 -- common/autotest_common.sh@10 -- # set +x 00:13:59.150 04:11:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.150 04:11:13 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:59.150 04:11:13 -- target/filesystem.sh@101 -- # killprocess 3911257 00:13:59.150 04:11:13 -- common/autotest_common.sh@926 -- # '[' -z 3911257 ']' 00:13:59.150 04:11:13 -- common/autotest_common.sh@930 -- # kill -0 3911257 
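Each filesystem case then runs the same short smoke test before the target is torn down; in outline, the target/filesystem.sh steps traced above are:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    # cleanup: drop the test partition, disconnect the host, delete the subsystem
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1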
00:13:59.150 04:11:13 -- common/autotest_common.sh@931 -- # uname 00:13:59.150 04:11:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:59.150 04:11:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3911257 00:13:59.409 04:11:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:59.410 04:11:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:59.410 04:11:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3911257' 00:13:59.410 killing process with pid 3911257 00:13:59.410 04:11:13 -- common/autotest_common.sh@945 -- # kill 3911257 00:13:59.410 04:11:13 -- common/autotest_common.sh@950 -- # wait 3911257 00:14:00.351 04:11:14 -- target/filesystem.sh@102 -- # nvmfpid= 00:14:00.351 00:14:00.351 real 0m12.013s 00:14:00.351 user 0m46.257s 00:14:00.351 sys 0m1.050s 00:14:00.351 04:11:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.351 04:11:14 -- common/autotest_common.sh@10 -- # set +x 00:14:00.351 ************************************ 00:14:00.351 END TEST nvmf_filesystem_in_capsule 00:14:00.351 ************************************ 00:14:00.351 04:11:14 -- target/filesystem.sh@108 -- # nvmftestfini 00:14:00.351 04:11:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:00.351 04:11:14 -- nvmf/common.sh@116 -- # sync 00:14:00.351 04:11:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:00.351 04:11:14 -- nvmf/common.sh@119 -- # set +e 00:14:00.351 04:11:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:00.351 04:11:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:00.351 rmmod nvme_tcp 00:14:00.351 rmmod nvme_fabrics 00:14:00.351 rmmod nvme_keyring 00:14:00.351 04:11:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:00.351 04:11:14 -- nvmf/common.sh@123 -- # set -e 00:14:00.351 04:11:14 -- nvmf/common.sh@124 -- # return 0 00:14:00.351 04:11:14 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:14:00.351 04:11:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:00.351 04:11:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:00.351 04:11:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:00.351 04:11:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.351 04:11:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:00.351 04:11:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.351 04:11:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.351 04:11:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.295 04:11:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:02.295 00:14:02.295 real 0m33.348s 00:14:02.295 user 1m39.392s 00:14:02.295 sys 0m6.318s 00:14:02.295 04:11:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.295 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:14:02.295 ************************************ 00:14:02.295 END TEST nvmf_filesystem 00:14:02.295 ************************************ 00:14:02.295 04:11:16 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:02.295 04:11:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:02.295 04:11:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.295 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:14:02.295 ************************************ 00:14:02.295 START TEST nvmf_discovery 00:14:02.295 ************************************ 00:14:02.295 04:11:16 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:02.555 * Looking for test storage... 00:14:02.555 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:02.555 04:11:16 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.555 04:11:16 -- nvmf/common.sh@7 -- # uname -s 00:14:02.555 04:11:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.555 04:11:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.555 04:11:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.555 04:11:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.555 04:11:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.555 04:11:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.555 04:11:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.555 04:11:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.555 04:11:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.555 04:11:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.555 04:11:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:14:02.555 04:11:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:14:02.555 04:11:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.555 04:11:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.555 04:11:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:02.555 04:11:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:02.555 04:11:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.555 04:11:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.555 04:11:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.555 04:11:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.555 04:11:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.555 04:11:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.555 04:11:16 -- paths/export.sh@5 -- # export PATH 00:14:02.555 04:11:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.555 04:11:16 -- nvmf/common.sh@46 -- # : 0 00:14:02.555 04:11:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:02.555 04:11:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:02.555 04:11:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:02.555 04:11:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.555 04:11:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.555 04:11:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:02.555 04:11:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:02.555 04:11:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:02.555 04:11:16 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:02.555 04:11:16 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:02.555 04:11:16 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:02.555 04:11:16 -- target/discovery.sh@15 -- # hash nvme 00:14:02.555 04:11:16 -- target/discovery.sh@20 -- # nvmftestinit 00:14:02.555 04:11:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:02.555 04:11:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.555 04:11:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:02.555 04:11:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:02.555 04:11:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:02.555 04:11:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.555 04:11:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.555 04:11:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.555 04:11:16 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:02.555 04:11:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:02.555 04:11:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:02.555 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:14:09.128 04:11:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:09.128 04:11:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:09.128 04:11:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:09.128 04:11:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:09.128 04:11:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:09.128 04:11:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:09.128 04:11:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:09.128 
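The --hostnqn/--hostid pair used by every nvme connect in these tests is produced once in nvmf/common.sh from nvme gen-hostnqn: the generated NQN embeds a UUID, and that same UUID is reused as the host ID. A rough illustration (the parameter expansion used to pull the UUID out is illustrative, not lifted from common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:80ef6226-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")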
04:11:22 -- nvmf/common.sh@294 -- # net_devs=() 00:14:09.128 04:11:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:09.128 04:11:22 -- nvmf/common.sh@295 -- # e810=() 00:14:09.128 04:11:22 -- nvmf/common.sh@295 -- # local -ga e810 00:14:09.128 04:11:22 -- nvmf/common.sh@296 -- # x722=() 00:14:09.128 04:11:22 -- nvmf/common.sh@296 -- # local -ga x722 00:14:09.128 04:11:22 -- nvmf/common.sh@297 -- # mlx=() 00:14:09.128 04:11:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:09.128 04:11:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.128 04:11:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:09.128 04:11:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:09.128 04:11:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:09.128 04:11:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:09.128 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:09.128 04:11:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:09.128 04:11:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:09.128 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:09.128 04:11:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:09.128 04:11:22 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:09.128 04:11:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.128 04:11:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:09.128 04:11:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.128 04:11:22 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:09.128 Found net devices under 0000:27:00.0: cvl_0_0 00:14:09.128 04:11:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.128 04:11:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:09.128 04:11:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.128 04:11:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:09.128 04:11:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.128 04:11:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:09.128 Found net devices under 0000:27:00.1: cvl_0_1 00:14:09.128 04:11:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.128 04:11:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:09.128 04:11:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:09.128 04:11:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:09.128 04:11:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.128 04:11:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.128 04:11:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.128 04:11:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:09.128 04:11:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.128 04:11:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.128 04:11:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:09.128 04:11:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.128 04:11:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.128 04:11:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:09.128 04:11:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:09.128 04:11:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.128 04:11:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.128 04:11:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.128 04:11:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.128 04:11:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:09.128 04:11:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.128 04:11:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.128 04:11:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.128 04:11:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:09.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:14:09.128 00:14:09.128 --- 10.0.0.2 ping statistics --- 00:14:09.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.128 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:14:09.128 04:11:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:14:09.128 00:14:09.128 --- 10.0.0.1 ping statistics --- 00:14:09.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.128 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:09.128 04:11:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.128 04:11:22 -- nvmf/common.sh@410 -- # return 0 00:14:09.128 04:11:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:09.128 04:11:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.128 04:11:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:09.128 04:11:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.128 04:11:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:09.128 04:11:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:09.128 04:11:22 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:09.128 04:11:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:09.128 04:11:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:09.128 04:11:22 -- common/autotest_common.sh@10 -- # set +x 00:14:09.128 04:11:22 -- nvmf/common.sh@469 -- # nvmfpid=3917773 00:14:09.128 04:11:22 -- nvmf/common.sh@470 -- # waitforlisten 3917773 00:14:09.128 04:11:22 -- common/autotest_common.sh@819 -- # '[' -z 3917773 ']' 00:14:09.128 04:11:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.128 04:11:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:09.128 04:11:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.128 04:11:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:09.128 04:11:22 -- common/autotest_common.sh@10 -- # set +x 00:14:09.128 04:11:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:09.128 [2024-05-14 04:11:22.708748] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:09.128 [2024-05-14 04:11:22.708850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.128 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.128 [2024-05-14 04:11:22.828751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.128 [2024-05-14 04:11:22.924654] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:09.128 [2024-05-14 04:11:22.924822] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.128 [2024-05-14 04:11:22.924835] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.128 [2024-05-14 04:11:22.924844] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
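Before the discovery target starts, nvmf/common.sh isolates the target-side port in its own network namespace and leaves the initiator-side port in the default namespace; stripped of the helper functions, the traced setup amounts to roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # sanity check: initiator -> target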
00:14:09.128 [2024-05-14 04:11:22.924997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.128 [2024-05-14 04:11:22.925022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.128 [2024-05-14 04:11:22.925122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.128 [2024-05-14 04:11:22.925133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.128 04:11:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:09.128 04:11:23 -- common/autotest_common.sh@852 -- # return 0 00:14:09.128 04:11:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:09.128 04:11:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:09.128 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.128 04:11:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.128 04:11:23 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.128 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.128 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.128 [2024-05-14 04:11:23.467681] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.128 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.128 04:11:23 -- target/discovery.sh@26 -- # seq 1 4 00:14:09.128 04:11:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:09.129 04:11:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 Null1 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 [2024-05-14 04:11:23.511947] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:09.129 04:11:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 Null2 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:09.129 04:11:23 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:09.129 04:11:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 Null3 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:09.129 04:11:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 Null4 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:09.129 
04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:09.129 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.129 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.129 04:11:23 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 4420 00:14:09.386 00:14:09.386 Discovery Log Number of Records 6, Generation counter 6 00:14:09.386 =====Discovery Log Entry 0====== 00:14:09.386 trtype: tcp 00:14:09.386 adrfam: ipv4 00:14:09.386 subtype: current discovery subsystem 00:14:09.386 treq: not required 00:14:09.386 portid: 0 00:14:09.386 trsvcid: 4420 00:14:09.386 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:09.386 traddr: 10.0.0.2 00:14:09.386 eflags: explicit discovery connections, duplicate discovery information 00:14:09.386 sectype: none 00:14:09.386 =====Discovery Log Entry 1====== 00:14:09.386 trtype: tcp 00:14:09.386 adrfam: ipv4 00:14:09.386 subtype: nvme subsystem 00:14:09.387 treq: not required 00:14:09.387 portid: 0 00:14:09.387 trsvcid: 4420 00:14:09.387 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:09.387 traddr: 10.0.0.2 00:14:09.387 eflags: none 00:14:09.387 sectype: none 00:14:09.387 =====Discovery Log Entry 2====== 00:14:09.387 trtype: tcp 00:14:09.387 adrfam: ipv4 00:14:09.387 subtype: nvme subsystem 00:14:09.387 treq: not required 00:14:09.387 portid: 0 00:14:09.387 trsvcid: 4420 00:14:09.387 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:09.387 traddr: 10.0.0.2 00:14:09.387 eflags: none 00:14:09.387 sectype: none 00:14:09.387 =====Discovery Log Entry 3====== 00:14:09.387 trtype: tcp 00:14:09.387 adrfam: ipv4 00:14:09.387 subtype: nvme subsystem 00:14:09.387 treq: not required 00:14:09.387 portid: 0 00:14:09.387 trsvcid: 4420 00:14:09.387 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:09.387 traddr: 10.0.0.2 00:14:09.387 eflags: none 00:14:09.387 sectype: none 00:14:09.387 =====Discovery Log Entry 4====== 00:14:09.387 trtype: tcp 00:14:09.387 adrfam: ipv4 00:14:09.387 subtype: nvme subsystem 00:14:09.387 treq: not required 00:14:09.387 portid: 0 00:14:09.387 trsvcid: 4420 00:14:09.387 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:09.387 traddr: 10.0.0.2 00:14:09.387 eflags: none 00:14:09.387 sectype: none 00:14:09.387 =====Discovery Log Entry 5====== 00:14:09.387 trtype: tcp 00:14:09.387 adrfam: ipv4 00:14:09.387 subtype: discovery subsystem referral 00:14:09.387 treq: not required 00:14:09.387 portid: 0 00:14:09.387 trsvcid: 4430 00:14:09.387 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:09.387 traddr: 10.0.0.2 00:14:09.387 eflags: none 00:14:09.387 sectype: none 00:14:09.387 04:11:23 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:09.387 Perform nvmf subsystem discovery via RPC 00:14:09.387 04:11:23 -- 
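The six discovery-log records listed above come from the configuration built a few lines earlier. Expressed as direct scripts/rpc.py calls (rpc_cmd in this trace is a wrapper around rpc.py talking to /var/tmp/spdk.sock), the provisioning is roughly the sketch below; the paths, NQNs, serial numbers and host NQN/ID are the ones visible in the log, and the loop form is an editorial condensation of the per-subsystem steps.

  RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

  # TCP transport; flags copied from the trace (-u 8192 = in-capsule data size).
  $RPC nvmf_create_transport -t tcp -o -u 8192

  for i in 1 2 3 4; do
      $RPC bdev_null_create "Null$i" 102400 512          # 100 MiB null bdev, 512 B blocks
      $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

  # Expose the discovery service itself and register one referral on port 4430.
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

  # Initiator-side check that produced the discovery-log listing above.
  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda \
      --hostid=80ef6226-405e-ee11-906e-a4bf01973fda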
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 [2024-05-14 04:11:23.771992] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:09.387 [ 00:14:09.387 { 00:14:09.387 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:09.387 "subtype": "Discovery", 00:14:09.387 "listen_addresses": [ 00:14:09.387 { 00:14:09.387 "transport": "TCP", 00:14:09.387 "trtype": "TCP", 00:14:09.387 "adrfam": "IPv4", 00:14:09.387 "traddr": "10.0.0.2", 00:14:09.387 "trsvcid": "4420" 00:14:09.387 } 00:14:09.387 ], 00:14:09.387 "allow_any_host": true, 00:14:09.387 "hosts": [] 00:14:09.387 }, 00:14:09.387 { 00:14:09.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.387 "subtype": "NVMe", 00:14:09.387 "listen_addresses": [ 00:14:09.387 { 00:14:09.387 "transport": "TCP", 00:14:09.387 "trtype": "TCP", 00:14:09.387 "adrfam": "IPv4", 00:14:09.387 "traddr": "10.0.0.2", 00:14:09.387 "trsvcid": "4420" 00:14:09.387 } 00:14:09.387 ], 00:14:09.387 "allow_any_host": true, 00:14:09.387 "hosts": [], 00:14:09.387 "serial_number": "SPDK00000000000001", 00:14:09.387 "model_number": "SPDK bdev Controller", 00:14:09.387 "max_namespaces": 32, 00:14:09.387 "min_cntlid": 1, 00:14:09.387 "max_cntlid": 65519, 00:14:09.387 "namespaces": [ 00:14:09.387 { 00:14:09.387 "nsid": 1, 00:14:09.387 "bdev_name": "Null1", 00:14:09.387 "name": "Null1", 00:14:09.387 "nguid": "E0B64946A6B140A6AA9D704FAA280EE6", 00:14:09.387 "uuid": "e0b64946-a6b1-40a6-aa9d-704faa280ee6" 00:14:09.387 } 00:14:09.387 ] 00:14:09.387 }, 00:14:09.387 { 00:14:09.387 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:09.387 "subtype": "NVMe", 00:14:09.387 "listen_addresses": [ 00:14:09.387 { 00:14:09.387 "transport": "TCP", 00:14:09.387 "trtype": "TCP", 00:14:09.387 "adrfam": "IPv4", 00:14:09.387 "traddr": "10.0.0.2", 00:14:09.387 "trsvcid": "4420" 00:14:09.387 } 00:14:09.387 ], 00:14:09.387 "allow_any_host": true, 00:14:09.387 "hosts": [], 00:14:09.387 "serial_number": "SPDK00000000000002", 00:14:09.387 "model_number": "SPDK bdev Controller", 00:14:09.387 "max_namespaces": 32, 00:14:09.387 "min_cntlid": 1, 00:14:09.387 "max_cntlid": 65519, 00:14:09.387 "namespaces": [ 00:14:09.387 { 00:14:09.387 "nsid": 1, 00:14:09.387 "bdev_name": "Null2", 00:14:09.387 "name": "Null2", 00:14:09.387 "nguid": "4DA52FBC73A64D8495E67000BD04E9E2", 00:14:09.387 "uuid": "4da52fbc-73a6-4d84-95e6-7000bd04e9e2" 00:14:09.387 } 00:14:09.387 ] 00:14:09.387 }, 00:14:09.387 { 00:14:09.387 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:09.387 "subtype": "NVMe", 00:14:09.387 "listen_addresses": [ 00:14:09.387 { 00:14:09.387 "transport": "TCP", 00:14:09.387 "trtype": "TCP", 00:14:09.387 "adrfam": "IPv4", 00:14:09.387 "traddr": "10.0.0.2", 00:14:09.387 "trsvcid": "4420" 00:14:09.387 } 00:14:09.387 ], 00:14:09.387 "allow_any_host": true, 00:14:09.387 "hosts": [], 00:14:09.387 "serial_number": "SPDK00000000000003", 00:14:09.387 "model_number": "SPDK bdev Controller", 00:14:09.387 "max_namespaces": 32, 00:14:09.387 "min_cntlid": 1, 00:14:09.387 "max_cntlid": 65519, 00:14:09.387 "namespaces": [ 00:14:09.387 { 00:14:09.387 "nsid": 1, 00:14:09.387 "bdev_name": "Null3", 00:14:09.387 "name": "Null3", 00:14:09.387 "nguid": "3A4876798D834771B711F6C743B851D3", 00:14:09.387 "uuid": "3a487679-8d83-4771-b711-f6c743b851d3" 00:14:09.387 } 00:14:09.387 ] 
00:14:09.387 }, 00:14:09.387 { 00:14:09.387 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:09.387 "subtype": "NVMe", 00:14:09.387 "listen_addresses": [ 00:14:09.387 { 00:14:09.387 "transport": "TCP", 00:14:09.387 "trtype": "TCP", 00:14:09.387 "adrfam": "IPv4", 00:14:09.387 "traddr": "10.0.0.2", 00:14:09.387 "trsvcid": "4420" 00:14:09.387 } 00:14:09.387 ], 00:14:09.387 "allow_any_host": true, 00:14:09.387 "hosts": [], 00:14:09.387 "serial_number": "SPDK00000000000004", 00:14:09.387 "model_number": "SPDK bdev Controller", 00:14:09.387 "max_namespaces": 32, 00:14:09.387 "min_cntlid": 1, 00:14:09.387 "max_cntlid": 65519, 00:14:09.387 "namespaces": [ 00:14:09.387 { 00:14:09.387 "nsid": 1, 00:14:09.387 "bdev_name": "Null4", 00:14:09.387 "name": "Null4", 00:14:09.387 "nguid": "CC35F64992DA41D78E8A232200F34A23", 00:14:09.387 "uuid": "cc35f649-92da-41d7-8e8a-232200f34a23" 00:14:09.387 } 00:14:09.387 ] 00:14:09.387 } 00:14:09.387 ] 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.387 04:11:23 -- target/discovery.sh@42 -- # seq 1 4 00:14:09.387 04:11:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:09.387 04:11:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.387 04:11:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.387 04:11:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:09.387 04:11:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.387 04:11:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.387 04:11:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:09.387 04:11:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.387 04:11:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.387 04:11:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:09.387 04:11:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
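The JSON above is the target-side view returned by nvmf_get_subsystems; the teardown running through this part of the trace then removes everything again. A condensed sketch of both steps, using the same RPCs and jq filters that discovery.sh uses (the loop is an editorial shorthand for the per-subsystem deletes continuing just below):

  RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

  # Target-side view of every subsystem (discovery plus cnode1..cnode4).
  $RPC nvmf_get_subsystems | jq -r '.[].nqn'

  # Teardown in the order the test uses: subsystem first, then its null bdev.
  for i in 1 2 3 4; do
      $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      $RPC bdev_null_delete "Null$i"
  done
  $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

  # After teardown no bdevs should remain.
  $RPC bdev_get_bdevs | jq -r '.[].name'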
00:14:09.387 04:11:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.387 04:11:23 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.387 04:11:23 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:09.387 04:11:23 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:09.387 04:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.387 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:14:09.387 04:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.387 04:11:23 -- target/discovery.sh@49 -- # check_bdevs= 00:14:09.387 04:11:23 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:09.387 04:11:23 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:09.387 04:11:23 -- target/discovery.sh@57 -- # nvmftestfini 00:14:09.387 04:11:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:09.387 04:11:23 -- nvmf/common.sh@116 -- # sync 00:14:09.387 04:11:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:09.387 04:11:23 -- nvmf/common.sh@119 -- # set +e 00:14:09.387 04:11:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:09.387 04:11:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:09.387 rmmod nvme_tcp 00:14:09.387 rmmod nvme_fabrics 00:14:09.387 rmmod nvme_keyring 00:14:09.387 04:11:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:09.387 04:11:23 -- nvmf/common.sh@123 -- # set -e 00:14:09.387 04:11:23 -- nvmf/common.sh@124 -- # return 0 00:14:09.387 04:11:23 -- nvmf/common.sh@477 -- # '[' -n 3917773 ']' 00:14:09.387 04:11:23 -- nvmf/common.sh@478 -- # killprocess 3917773 00:14:09.387 04:11:23 -- common/autotest_common.sh@926 -- # '[' -z 3917773 ']' 00:14:09.387 04:11:23 -- common/autotest_common.sh@930 -- # kill -0 3917773 00:14:09.387 04:11:23 -- common/autotest_common.sh@931 -- # uname 00:14:09.387 04:11:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:09.387 04:11:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3917773 00:14:09.645 04:11:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:09.645 04:11:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:09.645 04:11:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3917773' 00:14:09.645 killing process with pid 3917773 00:14:09.645 04:11:23 -- common/autotest_common.sh@945 -- # kill 3917773 00:14:09.645 [2024-05-14 04:11:23.996607] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:09.645 04:11:23 -- common/autotest_common.sh@950 -- # wait 3917773 00:14:09.903 04:11:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:09.903 04:11:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:09.903 04:11:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:09.903 04:11:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.903 04:11:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:09.903 04:11:24 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.903 04:11:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.904 04:11:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.439 04:11:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:12.439 00:14:12.439 real 0m9.632s 00:14:12.439 user 0m7.270s 00:14:12.439 sys 0m4.629s 00:14:12.439 04:11:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.439 04:11:26 -- common/autotest_common.sh@10 -- # set +x 00:14:12.439 ************************************ 00:14:12.439 END TEST nvmf_discovery 00:14:12.439 ************************************ 00:14:12.439 04:11:26 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:12.439 04:11:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:12.439 04:11:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:12.439 04:11:26 -- common/autotest_common.sh@10 -- # set +x 00:14:12.439 ************************************ 00:14:12.439 START TEST nvmf_referrals 00:14:12.439 ************************************ 00:14:12.439 04:11:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:12.439 * Looking for test storage... 00:14:12.439 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:12.439 04:11:26 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.439 04:11:26 -- nvmf/common.sh@7 -- # uname -s 00:14:12.439 04:11:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.439 04:11:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.439 04:11:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.439 04:11:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.439 04:11:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.439 04:11:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.439 04:11:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.439 04:11:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.439 04:11:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.439 04:11:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.439 04:11:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:14:12.439 04:11:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:14:12.439 04:11:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.439 04:11:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.439 04:11:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:12.439 04:11:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:12.439 04:11:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.439 04:11:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.439 04:11:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.439 04:11:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.439 04:11:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.439 04:11:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.439 04:11:26 -- paths/export.sh@5 -- # export PATH 00:14:12.439 04:11:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.440 04:11:26 -- nvmf/common.sh@46 -- # : 0 00:14:12.440 04:11:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:12.440 04:11:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:12.440 04:11:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:12.440 04:11:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.440 04:11:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.440 04:11:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:12.440 04:11:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:12.440 04:11:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:12.440 04:11:26 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:12.440 04:11:26 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:12.440 04:11:26 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:12.440 04:11:26 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:12.440 04:11:26 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:12.440 04:11:26 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:12.440 04:11:26 -- target/referrals.sh@37 -- # nvmftestinit 00:14:12.440 04:11:26 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:14:12.440 04:11:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.440 04:11:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:12.440 04:11:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:12.440 04:11:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:12.440 04:11:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.440 04:11:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.440 04:11:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.440 04:11:26 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:12.440 04:11:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:12.440 04:11:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:12.440 04:11:26 -- common/autotest_common.sh@10 -- # set +x 00:14:19.025 04:11:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:19.025 04:11:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:19.025 04:11:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:19.025 04:11:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:19.025 04:11:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:19.025 04:11:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:19.025 04:11:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:19.025 04:11:32 -- nvmf/common.sh@294 -- # net_devs=() 00:14:19.025 04:11:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:19.025 04:11:32 -- nvmf/common.sh@295 -- # e810=() 00:14:19.025 04:11:32 -- nvmf/common.sh@295 -- # local -ga e810 00:14:19.025 04:11:32 -- nvmf/common.sh@296 -- # x722=() 00:14:19.025 04:11:32 -- nvmf/common.sh@296 -- # local -ga x722 00:14:19.025 04:11:32 -- nvmf/common.sh@297 -- # mlx=() 00:14:19.025 04:11:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:19.025 04:11:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.025 04:11:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:19.025 04:11:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:19.025 04:11:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.025 04:11:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:19.025 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:19.025 04:11:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.025 04:11:32 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.025 04:11:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:19.025 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:19.025 04:11:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:19.025 04:11:32 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.025 04:11:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.025 04:11:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.025 04:11:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.025 04:11:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:19.025 Found net devices under 0000:27:00.0: cvl_0_0 00:14:19.025 04:11:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.025 04:11:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.025 04:11:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.025 04:11:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.025 04:11:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.025 04:11:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:19.025 Found net devices under 0000:27:00.1: cvl_0_1 00:14:19.025 04:11:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.025 04:11:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:19.025 04:11:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:19.025 04:11:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:19.025 04:11:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.025 04:11:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.025 04:11:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.025 04:11:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:19.025 04:11:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.025 04:11:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.025 04:11:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:19.025 04:11:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.025 04:11:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.025 04:11:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:19.025 04:11:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:19.025 04:11:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.025 04:11:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.025 04:11:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:14:19.025 04:11:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.025 04:11:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:19.025 04:11:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.025 04:11:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.025 04:11:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.025 04:11:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:19.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:14:19.025 00:14:19.025 --- 10.0.0.2 ping statistics --- 00:14:19.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.025 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:14:19.025 04:11:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:14:19.025 00:14:19.025 --- 10.0.0.1 ping statistics --- 00:14:19.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.025 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:14:19.025 04:11:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.025 04:11:32 -- nvmf/common.sh@410 -- # return 0 00:14:19.025 04:11:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:19.025 04:11:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.025 04:11:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:19.025 04:11:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.025 04:11:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:19.025 04:11:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:19.025 04:11:32 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:19.025 04:11:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:19.025 04:11:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:19.025 04:11:32 -- common/autotest_common.sh@10 -- # set +x 00:14:19.025 04:11:32 -- nvmf/common.sh@469 -- # nvmfpid=3922212 00:14:19.025 04:11:32 -- nvmf/common.sh@470 -- # waitforlisten 3922212 00:14:19.025 04:11:32 -- common/autotest_common.sh@819 -- # '[' -z 3922212 ']' 00:14:19.025 04:11:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.025 04:11:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:19.025 04:11:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.025 04:11:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:19.025 04:11:32 -- common/autotest_common.sh@10 -- # set +x 00:14:19.025 04:11:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:19.025 [2024-05-14 04:11:32.980382] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
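The nvmfappstart step traced above boils down to launching nvmf_tgt inside the test namespace and waiting for its RPC socket before issuing any rpc.py calls. A minimal sketch follows; the polling loop is only an illustrative stand-in for the harness's waitforlisten, which has more retries and diagnostics, and the core/tracepoint masks are the ones shown in the trace.

  SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk

  # Core mask 0xF (4 reactors) and tracepoint mask 0xFFFF, as in the trace.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Wait until the target answers on its UNIX-domain RPC socket.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt up, pid $nvmfpid"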
00:14:19.025 [2024-05-14 04:11:32.980511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.025 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.025 [2024-05-14 04:11:33.116821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.025 [2024-05-14 04:11:33.213309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.025 [2024-05-14 04:11:33.213502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.025 [2024-05-14 04:11:33.213517] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.025 [2024-05-14 04:11:33.213527] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.025 [2024-05-14 04:11:33.213619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.025 [2024-05-14 04:11:33.213646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.025 [2024-05-14 04:11:33.213751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.026 [2024-05-14 04:11:33.213761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.286 04:11:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:19.286 04:11:33 -- common/autotest_common.sh@852 -- # return 0 00:14:19.286 04:11:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:19.286 04:11:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:19.286 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.286 04:11:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.286 04:11:33 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:19.286 04:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.286 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.286 [2024-05-14 04:11:33.736466] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.286 04:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.286 04:11:33 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:19.286 04:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.286 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.286 [2024-05-14 04:11:33.752702] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:19.286 04:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.286 04:11:33 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:19.286 04:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.286 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.286 04:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.286 04:11:33 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:19.286 04:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.286 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.286 04:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.286 04:11:33 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:14:19.286 04:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.287 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.287 04:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.287 04:11:33 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:19.287 04:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.287 04:11:33 -- target/referrals.sh@48 -- # jq length 00:14:19.287 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.287 04:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.287 04:11:33 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:19.287 04:11:33 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:19.287 04:11:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:19.287 04:11:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:19.287 04:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.287 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.287 04:11:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:19.287 04:11:33 -- target/referrals.sh@21 -- # sort 00:14:19.287 04:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.287 04:11:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:19.287 04:11:33 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:19.287 04:11:33 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:19.287 04:11:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:19.287 04:11:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:19.287 04:11:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:19.287 04:11:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:19.287 04:11:33 -- target/referrals.sh@26 -- # sort 00:14:19.547 04:11:33 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:19.547 04:11:33 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:19.547 04:11:33 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:19.547 04:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.547 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.547 04:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.547 04:11:33 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:19.547 04:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.547 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.547 04:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.547 04:11:33 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:19.547 04:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.547 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.547 04:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.547 04:11:33 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:19.547 04:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.547 04:11:33 -- 
target/referrals.sh@56 -- # jq length 00:14:19.547 04:11:34 -- common/autotest_common.sh@10 -- # set +x 00:14:19.547 04:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.547 04:11:34 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:19.547 04:11:34 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:19.547 04:11:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:19.547 04:11:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:19.547 04:11:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:19.547 04:11:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:19.547 04:11:34 -- target/referrals.sh@26 -- # sort 00:14:19.806 04:11:34 -- target/referrals.sh@26 -- # echo 00:14:19.806 04:11:34 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:19.806 04:11:34 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:19.806 04:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.806 04:11:34 -- common/autotest_common.sh@10 -- # set +x 00:14:19.806 04:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.806 04:11:34 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:19.806 04:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.806 04:11:34 -- common/autotest_common.sh@10 -- # set +x 00:14:19.806 04:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.806 04:11:34 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:19.806 04:11:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:19.806 04:11:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:19.806 04:11:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:19.806 04:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.806 04:11:34 -- target/referrals.sh@21 -- # sort 00:14:19.806 04:11:34 -- common/autotest_common.sh@10 -- # set +x 00:14:19.806 04:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.806 04:11:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:19.806 04:11:34 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:19.806 04:11:34 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:19.806 04:11:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:19.806 04:11:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:19.806 04:11:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:19.806 04:11:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:19.806 04:11:34 -- target/referrals.sh@26 -- # sort 00:14:19.806 04:11:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:19.806 04:11:34 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:19.806 04:11:34 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:19.806 04:11:34 -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:19.806 04:11:34 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:14:19.806 04:11:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:19.806 04:11:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:20.066 04:11:34 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:20.066 04:11:34 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:20.066 04:11:34 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:20.066 04:11:34 -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:20.066 04:11:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:20.066 04:11:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:20.066 04:11:34 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:20.066 04:11:34 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:20.066 04:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.066 04:11:34 -- common/autotest_common.sh@10 -- # set +x 00:14:20.066 04:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.066 04:11:34 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:20.066 04:11:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:20.066 04:11:34 -- target/referrals.sh@21 -- # sort 00:14:20.066 04:11:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:20.066 04:11:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:20.066 04:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.066 04:11:34 -- common/autotest_common.sh@10 -- # set +x 00:14:20.066 04:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.067 04:11:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:20.067 04:11:34 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:20.067 04:11:34 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:20.067 04:11:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:20.067 04:11:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:20.067 04:11:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:20.067 04:11:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:20.067 04:11:34 -- target/referrals.sh@26 -- # sort 00:14:20.067 04:11:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:20.067 04:11:34 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:20.067 04:11:34 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:20.067 04:11:34 -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:20.067 04:11:34 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:20.327 04:11:34 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:20.327 04:11:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:20.327 04:11:34 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:20.327 04:11:34 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:20.327 04:11:34 -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:20.327 04:11:34 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:20.327 04:11:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:20.327 04:11:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:20.327 04:11:34 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:20.327 04:11:34 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:20.327 04:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.327 04:11:34 -- common/autotest_common.sh@10 -- # set +x 00:14:20.327 04:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.327 04:11:34 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:20.327 04:11:34 -- target/referrals.sh@82 -- # jq length 00:14:20.327 04:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.327 04:11:34 -- common/autotest_common.sh@10 -- # set +x 00:14:20.327 04:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.327 04:11:34 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:20.327 04:11:34 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:20.327 04:11:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:20.327 04:11:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:20.327 04:11:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:20.327 04:11:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:20.327 04:11:34 -- target/referrals.sh@26 -- # sort 00:14:20.587 04:11:34 -- target/referrals.sh@26 -- # echo 00:14:20.587 04:11:34 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:20.587 04:11:34 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:20.587 04:11:34 -- target/referrals.sh@86 -- # nvmftestfini 00:14:20.587 04:11:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:20.587 04:11:34 -- nvmf/common.sh@116 -- # sync 00:14:20.587 04:11:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:20.587 04:11:34 -- nvmf/common.sh@119 -- # set +e 00:14:20.587 04:11:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:20.587 04:11:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:20.587 rmmod nvme_tcp 00:14:20.587 rmmod nvme_fabrics 00:14:20.587 rmmod nvme_keyring 00:14:20.587 04:11:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:20.587 04:11:35 -- nvmf/common.sh@123 -- # set -e 00:14:20.587 04:11:35 -- nvmf/common.sh@124 -- # return 0 00:14:20.587 04:11:35 -- nvmf/common.sh@477 
-- # '[' -n 3922212 ']' 00:14:20.587 04:11:35 -- nvmf/common.sh@478 -- # killprocess 3922212 00:14:20.587 04:11:35 -- common/autotest_common.sh@926 -- # '[' -z 3922212 ']' 00:14:20.588 04:11:35 -- common/autotest_common.sh@930 -- # kill -0 3922212 00:14:20.588 04:11:35 -- common/autotest_common.sh@931 -- # uname 00:14:20.588 04:11:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:20.588 04:11:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3922212 00:14:20.588 04:11:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:20.588 04:11:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:20.588 04:11:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3922212' 00:14:20.588 killing process with pid 3922212 00:14:20.588 04:11:35 -- common/autotest_common.sh@945 -- # kill 3922212 00:14:20.588 04:11:35 -- common/autotest_common.sh@950 -- # wait 3922212 00:14:21.159 04:11:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:21.159 04:11:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:21.159 04:11:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:21.159 04:11:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.159 04:11:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:21.159 04:11:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.159 04:11:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.159 04:11:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.071 04:11:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:23.071 00:14:23.071 real 0m11.059s 00:14:23.071 user 0m11.090s 00:14:23.071 sys 0m5.343s 00:14:23.071 04:11:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.071 04:11:37 -- common/autotest_common.sh@10 -- # set +x 00:14:23.071 ************************************ 00:14:23.071 END TEST nvmf_referrals 00:14:23.071 ************************************ 00:14:23.071 04:11:37 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:23.071 04:11:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:23.071 04:11:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:23.071 04:11:37 -- common/autotest_common.sh@10 -- # set +x 00:14:23.071 ************************************ 00:14:23.071 START TEST nvmf_connect_disconnect 00:14:23.071 ************************************ 00:14:23.071 04:11:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:23.331 * Looking for test storage... 
00:14:23.332 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:23.332 04:11:37 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.332 04:11:37 -- nvmf/common.sh@7 -- # uname -s 00:14:23.332 04:11:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.332 04:11:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.332 04:11:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.332 04:11:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.332 04:11:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.332 04:11:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.332 04:11:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.332 04:11:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.332 04:11:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.332 04:11:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.332 04:11:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:14:23.332 04:11:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:14:23.332 04:11:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.332 04:11:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.332 04:11:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:23.332 04:11:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:23.332 04:11:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.332 04:11:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.332 04:11:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.332 04:11:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.332 04:11:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.332 04:11:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.332 04:11:37 -- paths/export.sh@5 -- # export PATH 00:14:23.332 04:11:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.332 04:11:37 -- nvmf/common.sh@46 -- # : 0 00:14:23.332 04:11:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:23.332 04:11:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:23.332 04:11:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:23.332 04:11:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.332 04:11:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.332 04:11:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:23.332 04:11:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:23.332 04:11:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:23.332 04:11:37 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.332 04:11:37 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.332 04:11:37 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:23.332 04:11:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:23.332 04:11:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.332 04:11:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:23.332 04:11:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:23.332 04:11:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:23.332 04:11:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.332 04:11:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.332 04:11:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.332 04:11:37 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:14:23.332 04:11:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:23.332 04:11:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:23.332 04:11:37 -- common/autotest_common.sh@10 -- # set +x 00:14:29.955 04:11:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:29.955 04:11:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:29.955 04:11:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:29.955 04:11:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:29.955 04:11:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:29.955 04:11:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:29.955 04:11:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:29.955 04:11:43 -- nvmf/common.sh@294 -- # net_devs=() 00:14:29.955 04:11:43 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:14:29.955 04:11:43 -- nvmf/common.sh@295 -- # e810=() 00:14:29.955 04:11:43 -- nvmf/common.sh@295 -- # local -ga e810 00:14:29.955 04:11:43 -- nvmf/common.sh@296 -- # x722=() 00:14:29.955 04:11:43 -- nvmf/common.sh@296 -- # local -ga x722 00:14:29.955 04:11:43 -- nvmf/common.sh@297 -- # mlx=() 00:14:29.955 04:11:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:29.955 04:11:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.955 04:11:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:29.955 04:11:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:29.955 04:11:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:29.955 04:11:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:29.955 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:29.955 04:11:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:29.955 04:11:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:29.955 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:29.955 04:11:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:29.955 04:11:43 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:14:29.955 04:11:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:29.955 04:11:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.955 04:11:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:29.955 04:11:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.955 04:11:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:29.955 Found net devices under 0000:27:00.0: 
cvl_0_0 00:14:29.955 04:11:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.955 04:11:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:29.955 04:11:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.955 04:11:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:29.955 04:11:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.955 04:11:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:29.956 Found net devices under 0000:27:00.1: cvl_0_1 00:14:29.956 04:11:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.956 04:11:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:29.956 04:11:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:29.956 04:11:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:29.956 04:11:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:29.956 04:11:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:29.956 04:11:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.956 04:11:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.956 04:11:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.956 04:11:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:29.956 04:11:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.956 04:11:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.956 04:11:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:29.956 04:11:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.956 04:11:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.956 04:11:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:29.956 04:11:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:29.956 04:11:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.956 04:11:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.956 04:11:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.956 04:11:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.956 04:11:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:29.956 04:11:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.956 04:11:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.956 04:11:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.956 04:11:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:29.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:14:29.956 00:14:29.956 --- 10.0.0.2 ping statistics --- 00:14:29.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.956 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:14:29.956 04:11:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:29.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:14:29.956 00:14:29.956 --- 10.0.0.1 ping statistics --- 00:14:29.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.956 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:14:29.956 04:11:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.956 04:11:43 -- nvmf/common.sh@410 -- # return 0 00:14:29.956 04:11:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:29.956 04:11:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.956 04:11:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:29.956 04:11:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:29.956 04:11:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.956 04:11:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:29.956 04:11:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:29.956 04:11:43 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:29.956 04:11:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:29.956 04:11:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:29.956 04:11:43 -- common/autotest_common.sh@10 -- # set +x 00:14:29.956 04:11:43 -- nvmf/common.sh@469 -- # nvmfpid=3926762 00:14:29.956 04:11:43 -- nvmf/common.sh@470 -- # waitforlisten 3926762 00:14:29.956 04:11:43 -- common/autotest_common.sh@819 -- # '[' -z 3926762 ']' 00:14:29.956 04:11:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.956 04:11:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:29.956 04:11:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.956 04:11:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:29.956 04:11:43 -- common/autotest_common.sh@10 -- # set +x 00:14:29.956 04:11:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.956 [2024-05-14 04:11:43.985539] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:29.956 [2024-05-14 04:11:43.985653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.956 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.956 [2024-05-14 04:11:44.112255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.956 [2024-05-14 04:11:44.212093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:29.956 [2024-05-14 04:11:44.212277] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.956 [2024-05-14 04:11:44.212291] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.956 [2024-05-14 04:11:44.212301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
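For reference, the nvmf_tcp_init plumbing traced above reduces to the sequence below. Interface names (cvl_0_0 / cvl_0_1), the namespace name, the 10.0.0.x addressing and the port-4420 firewall rule are copied verbatim from the trace; this is only a condensed sketch of what the harness does, not a drop-in replacement for nvmf/common.sh.

  # move the target-side port into its own namespace and address both ends
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic in, then sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1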
00:14:29.956 [2024-05-14 04:11:44.212380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.956 [2024-05-14 04:11:44.212416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.956 [2024-05-14 04:11:44.212521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.956 [2024-05-14 04:11:44.212532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.216 04:11:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:30.216 04:11:44 -- common/autotest_common.sh@852 -- # return 0 00:14:30.216 04:11:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:30.217 04:11:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:30.217 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:14:30.217 04:11:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.217 04:11:44 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:30.217 04:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.217 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:14:30.217 [2024-05-14 04:11:44.742683] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.217 04:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.217 04:11:44 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:30.217 04:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.217 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:14:30.217 04:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.217 04:11:44 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:30.217 04:11:44 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:30.217 04:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.217 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:14:30.217 04:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.217 04:11:44 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:30.217 04:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.217 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:14:30.478 04:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.478 04:11:44 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.478 04:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.478 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:14:30.478 [2024-05-14 04:11:44.811193] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.478 04:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.478 04:11:44 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:14:30.478 04:11:44 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:14:30.478 04:11:44 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:14:30.478 04:11:44 -- target/connect_disconnect.sh@34 -- # set +x 00:14:33.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:14:41.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.032 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:33.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:03.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.489 04:15:34 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
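The hundred "disconnected 1 controller(s)" lines above come from repeatedly attaching and detaching the initiator. A minimal sketch of the target setup and the loop follows, using the RPC names and nvme-cli flags exactly as traced; rpc_cmd in the harness is roughly scripts/rpc.py against the target's default socket, and the real connect_disconnect.sh adds its own waiting and verification between steps, so treat this only as an outline.

  # target side: TCP transport, a 64 MB / 512-byte-block malloc bdev, subsystem, listener
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512          # returned bdev name was Malloc0 in this run
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: 100 connect/disconnect iterations, 8 I/O queues per connect (nvme connect -i 8)
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
          --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done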
00:18:19.489 04:15:34 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:19.489 04:15:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:19.489 04:15:34 -- nvmf/common.sh@116 -- # sync 00:18:19.489 04:15:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:19.489 04:15:34 -- nvmf/common.sh@119 -- # set +e 00:18:19.489 04:15:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:19.489 04:15:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:19.748 rmmod nvme_tcp 00:18:19.748 rmmod nvme_fabrics 00:18:19.748 rmmod nvme_keyring 00:18:19.748 04:15:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:19.748 04:15:34 -- nvmf/common.sh@123 -- # set -e 00:18:19.748 04:15:34 -- nvmf/common.sh@124 -- # return 0 00:18:19.748 04:15:34 -- nvmf/common.sh@477 -- # '[' -n 3926762 ']' 00:18:19.748 04:15:34 -- nvmf/common.sh@478 -- # killprocess 3926762 00:18:19.748 04:15:34 -- common/autotest_common.sh@926 -- # '[' -z 3926762 ']' 00:18:19.748 04:15:34 -- common/autotest_common.sh@930 -- # kill -0 3926762 00:18:19.748 04:15:34 -- common/autotest_common.sh@931 -- # uname 00:18:19.748 04:15:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:19.748 04:15:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3926762 00:18:19.748 04:15:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:19.748 04:15:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:19.748 04:15:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3926762' 00:18:19.748 killing process with pid 3926762 00:18:19.748 04:15:34 -- common/autotest_common.sh@945 -- # kill 3926762 00:18:19.748 04:15:34 -- common/autotest_common.sh@950 -- # wait 3926762 00:18:20.392 04:15:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:20.392 04:15:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:20.392 04:15:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:20.392 04:15:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.392 04:15:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:20.392 04:15:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.392 04:15:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.392 04:15:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.302 04:15:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:22.302 00:18:22.302 real 3m59.132s 00:18:22.302 user 15m16.543s 00:18:22.302 sys 0m14.100s 00:18:22.302 04:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.302 04:15:36 -- common/autotest_common.sh@10 -- # set +x 00:18:22.302 ************************************ 00:18:22.302 END TEST nvmf_connect_disconnect 00:18:22.302 ************************************ 00:18:22.302 04:15:36 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:22.302 04:15:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:22.302 04:15:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:22.302 04:15:36 -- common/autotest_common.sh@10 -- # set +x 00:18:22.302 ************************************ 00:18:22.302 START TEST nvmf_multitarget 00:18:22.302 ************************************ 00:18:22.302 04:15:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:22.562 * Looking for test storage... 
00:18:22.562 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:22.562 04:15:36 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.562 04:15:36 -- nvmf/common.sh@7 -- # uname -s 00:18:22.562 04:15:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.562 04:15:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.562 04:15:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.562 04:15:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.562 04:15:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.562 04:15:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.562 04:15:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.562 04:15:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.562 04:15:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.562 04:15:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.562 04:15:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:18:22.562 04:15:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:18:22.562 04:15:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.562 04:15:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.562 04:15:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:22.562 04:15:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:22.562 04:15:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.562 04:15:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.562 04:15:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.562 04:15:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.563 04:15:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.563 04:15:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.563 04:15:36 -- paths/export.sh@5 -- # export PATH 00:18:22.563 04:15:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.563 04:15:36 -- nvmf/common.sh@46 -- # : 0 00:18:22.563 04:15:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:22.563 04:15:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:22.563 04:15:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:22.563 04:15:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.563 04:15:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.563 04:15:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:22.563 04:15:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:22.563 04:15:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:22.563 04:15:36 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:22.563 04:15:36 -- target/multitarget.sh@15 -- # nvmftestinit 00:18:22.563 04:15:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:22.563 04:15:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.563 04:15:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:22.563 04:15:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:22.563 04:15:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:22.563 04:15:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.563 04:15:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.563 04:15:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.563 04:15:36 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:18:22.563 04:15:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:22.563 04:15:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:22.563 04:15:36 -- common/autotest_common.sh@10 -- # set +x 00:18:29.136 04:15:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:29.136 04:15:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:29.136 04:15:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:29.136 04:15:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:29.136 04:15:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:29.136 04:15:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:29.136 04:15:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:29.136 04:15:42 -- nvmf/common.sh@294 -- # net_devs=() 00:18:29.136 04:15:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:29.136 04:15:42 -- 
nvmf/common.sh@295 -- # e810=() 00:18:29.136 04:15:42 -- nvmf/common.sh@295 -- # local -ga e810 00:18:29.136 04:15:42 -- nvmf/common.sh@296 -- # x722=() 00:18:29.136 04:15:42 -- nvmf/common.sh@296 -- # local -ga x722 00:18:29.136 04:15:42 -- nvmf/common.sh@297 -- # mlx=() 00:18:29.136 04:15:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:29.136 04:15:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:29.136 04:15:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:29.136 04:15:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:29.136 04:15:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:29.136 04:15:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:29.136 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:29.136 04:15:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:29.136 04:15:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:29.136 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:29.136 04:15:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:29.136 04:15:42 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:18:29.136 04:15:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:29.136 04:15:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.136 04:15:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:29.136 04:15:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.136 04:15:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:29.136 Found net devices under 0000:27:00.0: cvl_0_0 00:18:29.136 
04:15:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.136 04:15:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:29.136 04:15:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.136 04:15:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:29.136 04:15:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.136 04:15:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:29.136 Found net devices under 0000:27:00.1: cvl_0_1 00:18:29.136 04:15:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.137 04:15:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:29.137 04:15:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:29.137 04:15:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:29.137 04:15:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:29.137 04:15:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:29.137 04:15:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.137 04:15:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:29.137 04:15:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:29.137 04:15:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:29.137 04:15:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:29.137 04:15:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:29.137 04:15:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:29.137 04:15:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:29.137 04:15:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.137 04:15:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:29.137 04:15:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:29.137 04:15:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:29.137 04:15:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:29.137 04:15:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:29.137 04:15:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:29.137 04:15:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:29.137 04:15:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:29.137 04:15:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:29.137 04:15:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:29.137 04:15:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:29.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:18:29.137 00:18:29.137 --- 10.0.0.2 ping statistics --- 00:18:29.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.137 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:18:29.137 04:15:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:29.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:18:29.137 00:18:29.137 --- 10.0.0.1 ping statistics --- 00:18:29.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.137 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:18:29.137 04:15:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.137 04:15:42 -- nvmf/common.sh@410 -- # return 0 00:18:29.137 04:15:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:29.137 04:15:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.137 04:15:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:29.137 04:15:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:29.137 04:15:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.137 04:15:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:29.137 04:15:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:29.137 04:15:42 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:29.137 04:15:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:29.137 04:15:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:29.137 04:15:42 -- common/autotest_common.sh@10 -- # set +x 00:18:29.137 04:15:42 -- nvmf/common.sh@469 -- # nvmfpid=3977562 00:18:29.137 04:15:42 -- nvmf/common.sh@470 -- # waitforlisten 3977562 00:18:29.137 04:15:42 -- common/autotest_common.sh@819 -- # '[' -z 3977562 ']' 00:18:29.137 04:15:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.137 04:15:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:29.137 04:15:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.137 04:15:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:29.137 04:15:42 -- common/autotest_common.sh@10 -- # set +x 00:18:29.137 04:15:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:29.137 [2024-05-14 04:15:43.047300] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:29.137 [2024-05-14 04:15:43.047429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.137 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.137 [2024-05-14 04:15:43.184085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:29.137 [2024-05-14 04:15:43.279332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:29.137 [2024-05-14 04:15:43.279527] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.137 [2024-05-14 04:15:43.279541] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.137 [2024-05-14 04:15:43.279552] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
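Before the multitarget RPCs can be issued, nvmfappstart launches the target inside the namespace with the mask and flags shown above and waits for its RPC socket. A rough equivalent is sketched below; the polling loop is just one simple way to wait and is not how the harness's waitforlisten is implemented, and the rpc.py path is assumed to be the in-tree spdk/scripts/rpc.py.

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # poll the default RPC socket until the app answers (assumes spdk/scripts/rpc.py is available)
  until spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done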
00:18:29.137 [2024-05-14 04:15:43.279618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.137 [2024-05-14 04:15:43.279652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.137 [2024-05-14 04:15:43.279757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.137 [2024-05-14 04:15:43.279767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.396 04:15:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:29.396 04:15:43 -- common/autotest_common.sh@852 -- # return 0 00:18:29.396 04:15:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:29.396 04:15:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:29.396 04:15:43 -- common/autotest_common.sh@10 -- # set +x 00:18:29.396 04:15:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.396 04:15:43 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:29.396 04:15:43 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:29.396 04:15:43 -- target/multitarget.sh@21 -- # jq length 00:18:29.396 04:15:43 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:29.396 04:15:43 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:29.396 "nvmf_tgt_1" 00:18:29.396 04:15:43 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:29.655 "nvmf_tgt_2" 00:18:29.655 04:15:44 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:29.655 04:15:44 -- target/multitarget.sh@28 -- # jq length 00:18:29.655 04:15:44 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:29.655 04:15:44 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:29.655 true 00:18:29.916 04:15:44 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:29.916 true 00:18:29.916 04:15:44 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:29.916 04:15:44 -- target/multitarget.sh@35 -- # jq length 00:18:29.916 04:15:44 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:29.916 04:15:44 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:29.916 04:15:44 -- target/multitarget.sh@41 -- # nvmftestfini 00:18:29.916 04:15:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:29.916 04:15:44 -- nvmf/common.sh@116 -- # sync 00:18:29.916 04:15:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:29.916 04:15:44 -- nvmf/common.sh@119 -- # set +e 00:18:29.916 04:15:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:29.916 04:15:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:29.916 rmmod nvme_tcp 00:18:29.916 rmmod nvme_fabrics 00:18:29.916 rmmod nvme_keyring 00:18:29.916 04:15:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:29.916 04:15:44 -- nvmf/common.sh@123 -- # set -e 00:18:29.916 04:15:44 -- nvmf/common.sh@124 -- # return 0 00:18:29.916 04:15:44 -- nvmf/common.sh@477 
-- # '[' -n 3977562 ']' 00:18:29.916 04:15:44 -- nvmf/common.sh@478 -- # killprocess 3977562 00:18:29.916 04:15:44 -- common/autotest_common.sh@926 -- # '[' -z 3977562 ']' 00:18:29.916 04:15:44 -- common/autotest_common.sh@930 -- # kill -0 3977562 00:18:29.916 04:15:44 -- common/autotest_common.sh@931 -- # uname 00:18:29.916 04:15:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:29.916 04:15:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3977562 00:18:30.176 04:15:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:30.176 04:15:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:30.176 04:15:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3977562' 00:18:30.176 killing process with pid 3977562 00:18:30.176 04:15:44 -- common/autotest_common.sh@945 -- # kill 3977562 00:18:30.176 04:15:44 -- common/autotest_common.sh@950 -- # wait 3977562 00:18:30.437 04:15:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:30.437 04:15:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:30.437 04:15:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:30.437 04:15:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.437 04:15:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:30.437 04:15:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.437 04:15:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.437 04:15:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.973 04:15:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:32.973 00:18:32.973 real 0m10.247s 00:18:32.973 user 0m8.738s 00:18:32.973 sys 0m5.098s 00:18:32.974 04:15:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.974 04:15:47 -- common/autotest_common.sh@10 -- # set +x 00:18:32.974 ************************************ 00:18:32.974 END TEST nvmf_multitarget 00:18:32.974 ************************************ 00:18:32.974 04:15:47 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:32.974 04:15:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:32.974 04:15:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:32.974 04:15:47 -- common/autotest_common.sh@10 -- # set +x 00:18:32.974 ************************************ 00:18:32.974 START TEST nvmf_rpc 00:18:32.974 ************************************ 00:18:32.974 04:15:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:32.974 * Looking for test storage... 
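For reference, the nvmf_multitarget pass that just closed out, stripped of its jq length assertions, is essentially the call sequence below; the -s 32 value and target names are copied from the run above, and the expected counts are the ones the trace checked.

  rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  $rpc_py nvmf_get_targets | jq length            # 1: only the default target
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc_py nvmf_get_targets | jq length            # 3: default plus the two new targets
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  $rpc_py nvmf_get_targets | jq length            # back to 1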
00:18:32.974 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:32.974 04:15:47 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.974 04:15:47 -- nvmf/common.sh@7 -- # uname -s 00:18:32.974 04:15:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.974 04:15:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.974 04:15:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.974 04:15:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.974 04:15:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.974 04:15:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.974 04:15:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.974 04:15:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.974 04:15:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.974 04:15:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.974 04:15:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:18:32.974 04:15:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:18:32.974 04:15:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.974 04:15:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.974 04:15:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:32.974 04:15:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:32.974 04:15:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.974 04:15:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.974 04:15:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.974 04:15:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.974 04:15:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.974 04:15:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.974 04:15:47 -- paths/export.sh@5 -- # export PATH 00:18:32.974 04:15:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.974 04:15:47 -- nvmf/common.sh@46 -- # : 0 00:18:32.974 04:15:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:32.974 04:15:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:32.974 04:15:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:32.974 04:15:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.974 04:15:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.974 04:15:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:32.974 04:15:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:32.974 04:15:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:32.974 04:15:47 -- target/rpc.sh@11 -- # loops=5 00:18:32.974 04:15:47 -- target/rpc.sh@23 -- # nvmftestinit 00:18:32.974 04:15:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:32.974 04:15:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.974 04:15:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:32.974 04:15:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:32.974 04:15:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:32.974 04:15:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.974 04:15:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.974 04:15:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.974 04:15:47 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:18:32.974 04:15:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:32.974 04:15:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:32.974 04:15:47 -- common/autotest_common.sh@10 -- # set +x 00:18:38.249 04:15:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:38.249 04:15:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:38.249 04:15:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:38.249 04:15:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:38.249 04:15:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:38.249 04:15:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:38.249 04:15:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:38.249 04:15:52 -- nvmf/common.sh@294 -- # net_devs=() 00:18:38.249 04:15:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:38.249 04:15:52 -- nvmf/common.sh@295 -- # e810=() 00:18:38.249 04:15:52 -- nvmf/common.sh@295 -- # local -ga e810 
00:18:38.249 04:15:52 -- nvmf/common.sh@296 -- # x722=() 00:18:38.249 04:15:52 -- nvmf/common.sh@296 -- # local -ga x722 00:18:38.249 04:15:52 -- nvmf/common.sh@297 -- # mlx=() 00:18:38.249 04:15:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:38.249 04:15:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.249 04:15:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:38.249 04:15:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:38.249 04:15:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:38.249 04:15:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:38.249 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:38.249 04:15:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:38.249 04:15:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:38.249 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:38.249 04:15:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:38.249 04:15:52 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:38.249 04:15:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.249 04:15:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:38.249 04:15:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.249 04:15:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:38.249 Found net devices under 0000:27:00.0: cvl_0_0 00:18:38.249 04:15:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.249 04:15:52 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:38.249 04:15:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.249 04:15:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:38.249 04:15:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.249 04:15:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:38.249 Found net devices under 0000:27:00.1: cvl_0_1 00:18:38.249 04:15:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.249 04:15:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:38.249 04:15:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:38.249 04:15:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:38.249 04:15:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:38.249 04:15:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.249 04:15:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.249 04:15:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.249 04:15:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:38.249 04:15:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.249 04:15:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.249 04:15:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:38.249 04:15:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.249 04:15:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.249 04:15:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:38.249 04:15:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:38.249 04:15:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.249 04:15:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.249 04:15:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.249 04:15:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.249 04:15:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:38.249 04:15:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:38.249 04:15:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:38.249 04:15:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:38.249 04:15:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:38.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:18:38.249 00:18:38.249 --- 10.0.0.2 ping statistics --- 00:18:38.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.249 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:18:38.250 04:15:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:38.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:38.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:18:38.250 00:18:38.250 --- 10.0.0.1 ping statistics --- 00:18:38.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.250 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:38.250 04:15:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.250 04:15:52 -- nvmf/common.sh@410 -- # return 0 00:18:38.250 04:15:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:38.250 04:15:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.250 04:15:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:38.250 04:15:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:38.250 04:15:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.250 04:15:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:38.250 04:15:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:38.250 04:15:52 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:38.250 04:15:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:38.250 04:15:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:38.250 04:15:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.250 04:15:52 -- nvmf/common.sh@469 -- # nvmfpid=3981789 00:18:38.250 04:15:52 -- nvmf/common.sh@470 -- # waitforlisten 3981789 00:18:38.250 04:15:52 -- common/autotest_common.sh@819 -- # '[' -z 3981789 ']' 00:18:38.250 04:15:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.250 04:15:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:38.250 04:15:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.250 04:15:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:38.250 04:15:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.250 04:15:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:38.250 [2024-05-14 04:15:52.436962] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:38.250 [2024-05-14 04:15:52.437069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.250 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.250 [2024-05-14 04:15:52.567357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:38.250 [2024-05-14 04:15:52.664160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:38.250 [2024-05-14 04:15:52.664328] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.250 [2024-05-14 04:15:52.664342] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.250 [2024-05-14 04:15:52.664351] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:38.250 [2024-05-14 04:15:52.664433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.250 [2024-05-14 04:15:52.664468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.250 [2024-05-14 04:15:52.664485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.250 [2024-05-14 04:15:52.664498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:38.817 04:15:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:38.817 04:15:53 -- common/autotest_common.sh@852 -- # return 0 00:18:38.817 04:15:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:38.817 04:15:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:38.817 04:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:38.817 04:15:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.817 04:15:53 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:38.817 04:15:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.817 04:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:38.817 04:15:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.817 04:15:53 -- target/rpc.sh@26 -- # stats='{ 00:18:38.817 "tick_rate": 1900000000, 00:18:38.817 "poll_groups": [ 00:18:38.817 { 00:18:38.817 "name": "nvmf_tgt_poll_group_0", 00:18:38.817 "admin_qpairs": 0, 00:18:38.817 "io_qpairs": 0, 00:18:38.817 "current_admin_qpairs": 0, 00:18:38.817 "current_io_qpairs": 0, 00:18:38.817 "pending_bdev_io": 0, 00:18:38.817 "completed_nvme_io": 0, 00:18:38.817 "transports": [] 00:18:38.817 }, 00:18:38.817 { 00:18:38.817 "name": "nvmf_tgt_poll_group_1", 00:18:38.817 "admin_qpairs": 0, 00:18:38.817 "io_qpairs": 0, 00:18:38.817 "current_admin_qpairs": 0, 00:18:38.817 "current_io_qpairs": 0, 00:18:38.817 "pending_bdev_io": 0, 00:18:38.817 "completed_nvme_io": 0, 00:18:38.817 "transports": [] 00:18:38.817 }, 00:18:38.817 { 00:18:38.817 "name": "nvmf_tgt_poll_group_2", 00:18:38.817 "admin_qpairs": 0, 00:18:38.817 "io_qpairs": 0, 00:18:38.817 "current_admin_qpairs": 0, 00:18:38.817 "current_io_qpairs": 0, 00:18:38.817 "pending_bdev_io": 0, 00:18:38.817 "completed_nvme_io": 0, 00:18:38.817 "transports": [] 00:18:38.817 }, 00:18:38.817 { 00:18:38.817 "name": "nvmf_tgt_poll_group_3", 00:18:38.817 "admin_qpairs": 0, 00:18:38.817 "io_qpairs": 0, 00:18:38.817 "current_admin_qpairs": 0, 00:18:38.817 "current_io_qpairs": 0, 00:18:38.817 "pending_bdev_io": 0, 00:18:38.817 "completed_nvme_io": 0, 00:18:38.817 "transports": [] 00:18:38.817 } 00:18:38.817 ] 00:18:38.817 }' 00:18:38.817 04:15:53 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:38.817 04:15:53 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:38.817 04:15:53 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:38.817 04:15:53 -- target/rpc.sh@15 -- # wc -l 00:18:38.817 04:15:53 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:38.817 04:15:53 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:38.817 04:15:53 -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:38.817 04:15:53 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.817 04:15:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.817 04:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:38.817 [2024-05-14 04:15:53.265806] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.817 04:15:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.818 04:15:53 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:38.818 04:15:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.818 04:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:38.818 04:15:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.818 04:15:53 -- target/rpc.sh@33 -- # stats='{ 00:18:38.818 "tick_rate": 1900000000, 00:18:38.818 "poll_groups": [ 00:18:38.818 { 00:18:38.818 "name": "nvmf_tgt_poll_group_0", 00:18:38.818 "admin_qpairs": 0, 00:18:38.818 "io_qpairs": 0, 00:18:38.818 "current_admin_qpairs": 0, 00:18:38.818 "current_io_qpairs": 0, 00:18:38.818 "pending_bdev_io": 0, 00:18:38.818 "completed_nvme_io": 0, 00:18:38.818 "transports": [ 00:18:38.818 { 00:18:38.818 "trtype": "TCP" 00:18:38.818 } 00:18:38.818 ] 00:18:38.818 }, 00:18:38.818 { 00:18:38.818 "name": "nvmf_tgt_poll_group_1", 00:18:38.818 "admin_qpairs": 0, 00:18:38.818 "io_qpairs": 0, 00:18:38.818 "current_admin_qpairs": 0, 00:18:38.818 "current_io_qpairs": 0, 00:18:38.818 "pending_bdev_io": 0, 00:18:38.818 "completed_nvme_io": 0, 00:18:38.818 "transports": [ 00:18:38.818 { 00:18:38.818 "trtype": "TCP" 00:18:38.818 } 00:18:38.818 ] 00:18:38.818 }, 00:18:38.818 { 00:18:38.818 "name": "nvmf_tgt_poll_group_2", 00:18:38.818 "admin_qpairs": 0, 00:18:38.818 "io_qpairs": 0, 00:18:38.818 "current_admin_qpairs": 0, 00:18:38.818 "current_io_qpairs": 0, 00:18:38.818 "pending_bdev_io": 0, 00:18:38.818 "completed_nvme_io": 0, 00:18:38.818 "transports": [ 00:18:38.818 { 00:18:38.818 "trtype": "TCP" 00:18:38.818 } 00:18:38.818 ] 00:18:38.818 }, 00:18:38.818 { 00:18:38.818 "name": "nvmf_tgt_poll_group_3", 00:18:38.818 "admin_qpairs": 0, 00:18:38.818 "io_qpairs": 0, 00:18:38.818 "current_admin_qpairs": 0, 00:18:38.818 "current_io_qpairs": 0, 00:18:38.818 "pending_bdev_io": 0, 00:18:38.818 "completed_nvme_io": 0, 00:18:38.818 "transports": [ 00:18:38.818 { 00:18:38.818 "trtype": "TCP" 00:18:38.818 } 00:18:38.818 ] 00:18:38.818 } 00:18:38.818 ] 00:18:38.818 }' 00:18:38.818 04:15:53 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:38.818 04:15:53 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:38.818 04:15:53 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:38.818 04:15:53 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:38.818 04:15:53 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:38.818 04:15:53 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:38.818 04:15:53 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:38.818 04:15:53 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:38.818 04:15:53 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:38.818 04:15:53 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:38.818 04:15:53 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:38.818 04:15:53 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:38.818 04:15:53 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:38.818 04:15:53 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:38.818 04:15:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.818 04:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:38.818 Malloc1 00:18:38.818 04:15:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:38.818 04:15:53 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:38.818 04:15:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:38.818 04:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.075 
04:15:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.075 04:15:53 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:39.075 04:15:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.075 04:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.075 04:15:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.075 04:15:53 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:39.075 04:15:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.075 04:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.075 04:15:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.075 04:15:53 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.075 04:15:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.075 04:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.075 [2024-05-14 04:15:53.430998] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.075 04:15:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.075 04:15:53 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.2 -s 4420 00:18:39.075 04:15:53 -- common/autotest_common.sh@640 -- # local es=0 00:18:39.075 04:15:53 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.2 -s 4420 00:18:39.075 04:15:53 -- common/autotest_common.sh@628 -- # local arg=nvme 00:18:39.075 04:15:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:39.075 04:15:53 -- common/autotest_common.sh@632 -- # type -t nvme 00:18:39.075 04:15:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:39.075 04:15:53 -- common/autotest_common.sh@634 -- # type -P nvme 00:18:39.075 04:15:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:39.075 04:15:53 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:18:39.075 04:15:53 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:18:39.075 04:15:53 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.2 -s 4420 00:18:39.075 [2024-05-14 04:15:53.459794] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda' 00:18:39.075 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:39.075 could not add new controller: failed to write to nvme-fabrics device 00:18:39.075 04:15:53 -- common/autotest_common.sh@643 -- # es=1 00:18:39.075 04:15:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:39.075 04:15:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:39.075 04:15:53 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:18:39.075 04:15:53 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:18:39.075 04:15:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.075 04:15:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.075 04:15:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.075 04:15:53 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:40.455 04:15:54 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:40.455 04:15:54 -- common/autotest_common.sh@1177 -- # local i=0 00:18:40.455 04:15:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.455 04:15:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:40.455 04:15:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:42.363 04:15:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:42.622 04:15:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:42.622 04:15:56 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:42.622 04:15:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:42.622 04:15:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.622 04:15:56 -- common/autotest_common.sh@1187 -- # return 0 00:18:42.622 04:15:56 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:42.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:42.622 04:15:57 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:42.622 04:15:57 -- common/autotest_common.sh@1198 -- # local i=0 00:18:42.622 04:15:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:42.622 04:15:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:42.622 04:15:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:42.622 04:15:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:42.622 04:15:57 -- common/autotest_common.sh@1210 -- # return 0 00:18:42.622 04:15:57 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:18:42.622 04:15:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.622 04:15:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.622 04:15:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.622 04:15:57 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:42.622 04:15:57 -- common/autotest_common.sh@640 -- # local es=0 00:18:42.622 04:15:57 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:42.622 04:15:57 -- common/autotest_common.sh@628 -- # local arg=nvme 00:18:42.622 04:15:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.622 04:15:57 -- common/autotest_common.sh@632 -- # type -t nvme 00:18:42.622 04:15:57 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.622 04:15:57 -- common/autotest_common.sh@634 -- # type -P nvme 00:18:42.622 04:15:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.622 04:15:57 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:18:42.622 04:15:57 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:18:42.622 04:15:57 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:42.622 [2024-05-14 04:15:57.158894] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda' 00:18:42.622 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:42.622 could not add new controller: failed to write to nvme-fabrics device 00:18:42.622 04:15:57 -- common/autotest_common.sh@643 -- # es=1 00:18:42.622 04:15:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:42.622 04:15:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:42.622 04:15:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:42.622 04:15:57 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:42.622 04:15:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:42.622 04:15:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.622 04:15:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:42.622 04:15:57 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:44.535 04:15:58 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:44.535 04:15:58 -- common/autotest_common.sh@1177 -- # local i=0 00:18:44.535 04:15:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.535 04:15:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:44.535 04:15:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:46.441 04:16:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:46.441 04:16:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:46.441 04:16:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:46.441 04:16:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:46.441 04:16:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.441 04:16:00 -- common/autotest_common.sh@1187 -- # return 0 00:18:46.441 04:16:00 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:46.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.441 04:16:00 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:46.441 04:16:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:46.441 04:16:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:46.441 04:16:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.441 04:16:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.441 04:16:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:46.441 04:16:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:46.441 04:16:00 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.441 04:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.441 04:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.441 04:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.441 04:16:00 -- target/rpc.sh@81 -- # seq 1 5 00:18:46.441 04:16:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:46.441 04:16:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:46.441 04:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.441 04:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.441 04:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.441 04:16:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.441 04:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.441 04:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.441 [2024-05-14 04:16:00.850349] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.441 04:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.441 04:16:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:46.441 04:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.441 04:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.441 04:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.441 04:16:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:46.441 04:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:46.441 04:16:00 -- common/autotest_common.sh@10 -- # set +x 00:18:46.441 04:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:46.441 04:16:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:47.856 04:16:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:47.856 04:16:02 -- common/autotest_common.sh@1177 -- # local i=0 00:18:47.856 04:16:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.856 04:16:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:47.856 04:16:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:49.765 04:16:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:49.765 04:16:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:49.765 04:16:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.026 04:16:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:50.026 04:16:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.026 04:16:04 -- common/autotest_common.sh@1187 -- # return 0 00:18:50.026 04:16:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.026 04:16:04 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.026 04:16:04 -- common/autotest_common.sh@1198 -- # local i=0 00:18:50.026 04:16:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:50.026 04:16:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
00:18:50.026 04:16:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:50.026 04:16:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.026 04:16:04 -- common/autotest_common.sh@1210 -- # return 0 00:18:50.026 04:16:04 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:50.026 04:16:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.026 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:18:50.026 04:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.026 04:16:04 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.026 04:16:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.026 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:18:50.026 04:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.026 04:16:04 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:50.026 04:16:04 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:50.026 04:16:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.026 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:18:50.026 04:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.026 04:16:04 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.026 04:16:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.026 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:18:50.026 [2024-05-14 04:16:04.555488] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.026 04:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.026 04:16:04 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:50.026 04:16:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.026 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:18:50.026 04:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.026 04:16:04 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:50.026 04:16:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.026 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:18:50.026 04:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.026 04:16:04 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:51.407 04:16:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:51.407 04:16:05 -- common/autotest_common.sh@1177 -- # local i=0 00:18:51.407 04:16:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.407 04:16:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:51.407 04:16:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:53.948 04:16:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:53.948 04:16:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:53.948 04:16:07 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.948 04:16:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:53.948 04:16:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.948 04:16:07 -- 
common/autotest_common.sh@1187 -- # return 0 00:18:53.948 04:16:07 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.948 04:16:08 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:53.948 04:16:08 -- common/autotest_common.sh@1198 -- # local i=0 00:18:53.948 04:16:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:53.948 04:16:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.948 04:16:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:53.948 04:16:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.948 04:16:08 -- common/autotest_common.sh@1210 -- # return 0 00:18:53.948 04:16:08 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:53.948 04:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.948 04:16:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.948 04:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.948 04:16:08 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.948 04:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.948 04:16:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.948 04:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.948 04:16:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:53.948 04:16:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:53.948 04:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.948 04:16:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.948 04:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.948 04:16:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.948 04:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.948 04:16:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.948 [2024-05-14 04:16:08.212216] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.948 04:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.948 04:16:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:53.948 04:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.948 04:16:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.948 04:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.948 04:16:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:53.948 04:16:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:53.948 04:16:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.948 04:16:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:53.948 04:16:08 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:55.324 04:16:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:55.324 04:16:09 -- common/autotest_common.sh@1177 -- # local i=0 00:18:55.324 04:16:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.324 04:16:09 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:18:55.324 04:16:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:57.224 04:16:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:57.224 04:16:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:57.224 04:16:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:57.224 04:16:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:57.224 04:16:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.224 04:16:11 -- common/autotest_common.sh@1187 -- # return 0 00:18:57.224 04:16:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:57.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:57.483 04:16:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:57.483 04:16:11 -- common/autotest_common.sh@1198 -- # local i=0 00:18:57.483 04:16:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:57.483 04:16:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:57.483 04:16:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:57.483 04:16:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:57.483 04:16:11 -- common/autotest_common.sh@1210 -- # return 0 00:18:57.483 04:16:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:57.483 04:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:57.483 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:57.483 04:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:57.483 04:16:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.483 04:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:57.483 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:57.483 04:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:57.483 04:16:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:57.483 04:16:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:57.483 04:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:57.483 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:57.483 04:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:57.483 04:16:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.483 04:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:57.483 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:57.483 [2024-05-14 04:16:11.888259] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.483 04:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:57.483 04:16:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:57.483 04:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:57.483 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:57.483 04:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:57.483 04:16:11 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:57.483 04:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:57.483 04:16:11 -- common/autotest_common.sh@10 -- # set +x 00:18:57.483 04:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:57.483 
04:16:11 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:58.859 04:16:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:58.859 04:16:13 -- common/autotest_common.sh@1177 -- # local i=0 00:18:58.859 04:16:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.859 04:16:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:58.859 04:16:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:00.769 04:16:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:00.769 04:16:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:00.769 04:16:15 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:00.769 04:16:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:00.769 04:16:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.769 04:16:15 -- common/autotest_common.sh@1187 -- # return 0 00:19:00.769 04:16:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:01.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:01.030 04:16:15 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:01.030 04:16:15 -- common/autotest_common.sh@1198 -- # local i=0 00:19:01.030 04:16:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:01.030 04:16:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:01.030 04:16:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:01.030 04:16:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:01.030 04:16:15 -- common/autotest_common.sh@1210 -- # return 0 00:19:01.030 04:16:15 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:01.030 04:16:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.030 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:19:01.030 04:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.030 04:16:15 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.030 04:16:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.030 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:19:01.030 04:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.030 04:16:15 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:01.030 04:16:15 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:01.030 04:16:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.030 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:19:01.030 04:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.030 04:16:15 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.030 04:16:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.030 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:19:01.030 [2024-05-14 04:16:15.576059] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.030 04:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.030 04:16:15 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:01.030 
04:16:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.030 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:19:01.030 04:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.030 04:16:15 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:01.030 04:16:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.030 04:16:15 -- common/autotest_common.sh@10 -- # set +x 00:19:01.030 04:16:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.030 04:16:15 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.937 04:16:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:02.937 04:16:17 -- common/autotest_common.sh@1177 -- # local i=0 00:19:02.937 04:16:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.937 04:16:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:02.937 04:16:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:04.843 04:16:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:04.843 04:16:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:04.843 04:16:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.843 04:16:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:04.843 04:16:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.843 04:16:19 -- common/autotest_common.sh@1187 -- # return 0 00:19:04.843 04:16:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.843 04:16:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:04.843 04:16:19 -- common/autotest_common.sh@1198 -- # local i=0 00:19:04.843 04:16:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:04.843 04:16:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:04.843 04:16:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:04.843 04:16:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:04.843 04:16:19 -- common/autotest_common.sh@1210 -- # return 0 00:19:04.843 04:16:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@99 -- # seq 1 5 00:19:04.843 04:16:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:04.843 04:16:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 [2024-05-14 04:16:19.301542] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:04.843 04:16:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 [2024-05-14 04:16:19.349502] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- 
common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:04.843 04:16:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.843 04:16:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.843 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.843 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.844 [2024-05-14 04:16:19.397538] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.844 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.844 04:16:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.844 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.844 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.844 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.844 04:16:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:04.844 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.844 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.844 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.844 04:16:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.844 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.844 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:04.844 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.844 04:16:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.844 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.844 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:05.103 04:16:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 [2024-05-14 04:16:19.445590] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 
04:16:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:05.103 04:16:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 [2024-05-14 04:16:19.493658] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:19:05.103 04:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.103 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 04:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.103 04:16:19 -- target/rpc.sh@110 -- # stats='{ 00:19:05.103 "tick_rate": 1900000000, 00:19:05.103 "poll_groups": [ 00:19:05.103 { 00:19:05.103 "name": "nvmf_tgt_poll_group_0", 00:19:05.103 "admin_qpairs": 0, 00:19:05.103 "io_qpairs": 224, 00:19:05.103 "current_admin_qpairs": 0, 00:19:05.103 "current_io_qpairs": 0, 00:19:05.103 "pending_bdev_io": 0, 00:19:05.103 "completed_nvme_io": 421, 00:19:05.103 "transports": [ 00:19:05.103 { 00:19:05.103 "trtype": "TCP" 00:19:05.103 } 00:19:05.103 ] 00:19:05.103 }, 00:19:05.103 { 00:19:05.103 "name": "nvmf_tgt_poll_group_1", 00:19:05.103 "admin_qpairs": 1, 00:19:05.103 "io_qpairs": 223, 00:19:05.103 "current_admin_qpairs": 0, 00:19:05.103 "current_io_qpairs": 0, 00:19:05.103 "pending_bdev_io": 0, 00:19:05.103 "completed_nvme_io": 229, 00:19:05.103 "transports": [ 00:19:05.103 { 00:19:05.103 "trtype": "TCP" 00:19:05.103 } 00:19:05.103 ] 00:19:05.103 }, 00:19:05.103 { 00:19:05.103 "name": "nvmf_tgt_poll_group_2", 00:19:05.103 "admin_qpairs": 6, 00:19:05.103 "io_qpairs": 218, 00:19:05.103 "current_admin_qpairs": 0, 00:19:05.103 "current_io_qpairs": 0, 00:19:05.103 "pending_bdev_io": 0, 00:19:05.103 "completed_nvme_io": 267, 00:19:05.103 "transports": [ 00:19:05.103 { 00:19:05.103 "trtype": "TCP" 00:19:05.103 } 00:19:05.103 ] 00:19:05.103 }, 00:19:05.103 { 00:19:05.103 "name": "nvmf_tgt_poll_group_3", 00:19:05.103 "admin_qpairs": 0, 00:19:05.103 "io_qpairs": 224, 00:19:05.103 "current_admin_qpairs": 0, 00:19:05.103 "current_io_qpairs": 0, 00:19:05.103 "pending_bdev_io": 0, 00:19:05.103 "completed_nvme_io": 322, 00:19:05.103 "transports": [ 00:19:05.103 { 00:19:05.103 "trtype": "TCP" 00:19:05.103 } 00:19:05.103 ] 00:19:05.103 } 00:19:05.103 ] 00:19:05.103 }' 00:19:05.103 04:16:19 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:05.103 04:16:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:05.103 04:16:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:05.103 04:16:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:05.103 04:16:19 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:05.103 04:16:19 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:05.103 04:16:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:05.103 04:16:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:05.103 04:16:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:05.103 04:16:19 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:19:05.103 04:16:19 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:19:05.103 04:16:19 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:05.103 04:16:19 -- target/rpc.sh@123 -- # nvmftestfini 00:19:05.103 04:16:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:05.103 04:16:19 -- nvmf/common.sh@116 -- # sync 00:19:05.103 04:16:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:05.103 04:16:19 -- nvmf/common.sh@119 -- # set +e 00:19:05.103 04:16:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:05.103 04:16:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:05.103 rmmod nvme_tcp 00:19:05.103 rmmod nvme_fabrics 00:19:05.103 rmmod nvme_keyring 00:19:05.103 04:16:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:05.103 04:16:19 -- nvmf/common.sh@123 -- # set -e 00:19:05.103 04:16:19 -- 
nvmf/common.sh@124 -- # return 0 00:19:05.103 04:16:19 -- nvmf/common.sh@477 -- # '[' -n 3981789 ']' 00:19:05.103 04:16:19 -- nvmf/common.sh@478 -- # killprocess 3981789 00:19:05.103 04:16:19 -- common/autotest_common.sh@926 -- # '[' -z 3981789 ']' 00:19:05.103 04:16:19 -- common/autotest_common.sh@930 -- # kill -0 3981789 00:19:05.103 04:16:19 -- common/autotest_common.sh@931 -- # uname 00:19:05.103 04:16:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:05.103 04:16:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3981789 00:19:05.362 04:16:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:05.362 04:16:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:05.362 04:16:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3981789' 00:19:05.362 killing process with pid 3981789 00:19:05.362 04:16:19 -- common/autotest_common.sh@945 -- # kill 3981789 00:19:05.362 04:16:19 -- common/autotest_common.sh@950 -- # wait 3981789 00:19:05.932 04:16:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:05.932 04:16:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:05.932 04:16:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:05.932 04:16:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.932 04:16:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:05.932 04:16:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.932 04:16:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.932 04:16:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.841 04:16:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:07.841 00:19:07.841 real 0m35.202s 00:19:07.841 user 1m50.901s 00:19:07.841 sys 0m5.226s 00:19:07.841 04:16:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.841 04:16:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.841 ************************************ 00:19:07.841 END TEST nvmf_rpc 00:19:07.841 ************************************ 00:19:07.841 04:16:22 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:07.841 04:16:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:07.841 04:16:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:07.841 04:16:22 -- common/autotest_common.sh@10 -- # set +x 00:19:07.841 ************************************ 00:19:07.841 START TEST nvmf_invalid 00:19:07.841 ************************************ 00:19:07.841 04:16:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:07.841 * Looking for test storage... 
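[editorial sketch] For reference, the qpair totals verified at the end of the nvmf_rpc test above ((( 7 > 0 )) admin qpairs and (( 889 > 0 )) I/O qpairs) come from the jsum helper traced at rpc.sh@19-20: it applies a jq filter to the captured nvmf_get_stats JSON and sums the values with awk. A minimal sketch, assuming the stats output was captured into $stats and is fed to jq on stdin (the exact plumbing is not visible in the trace):

    # Sum one numeric field across all poll groups in the nvmf_get_stats JSON.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # Usage as in the checks above:
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))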
00:19:08.102 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:08.102 04:16:22 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.102 04:16:22 -- nvmf/common.sh@7 -- # uname -s 00:19:08.102 04:16:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.102 04:16:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.102 04:16:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.102 04:16:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.102 04:16:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.102 04:16:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.102 04:16:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.102 04:16:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.102 04:16:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.102 04:16:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.102 04:16:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:08.102 04:16:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:19:08.102 04:16:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.102 04:16:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.102 04:16:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:08.102 04:16:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:08.102 04:16:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.102 04:16:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.102 04:16:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.102 04:16:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.102 04:16:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.102 04:16:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.102 04:16:22 -- paths/export.sh@5 -- # export PATH 00:19:08.102 04:16:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.102 04:16:22 -- nvmf/common.sh@46 -- # : 0 00:19:08.102 04:16:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:08.102 04:16:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:08.102 04:16:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:08.103 04:16:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.103 04:16:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.103 04:16:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:08.103 04:16:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:08.103 04:16:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:08.103 04:16:22 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:08.103 04:16:22 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:08.103 04:16:22 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:08.103 04:16:22 -- target/invalid.sh@14 -- # target=foobar 00:19:08.103 04:16:22 -- target/invalid.sh@16 -- # RANDOM=0 00:19:08.103 04:16:22 -- target/invalid.sh@34 -- # nvmftestinit 00:19:08.103 04:16:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:08.103 04:16:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.103 04:16:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:08.103 04:16:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:08.103 04:16:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:08.103 04:16:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.103 04:16:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.103 04:16:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.103 04:16:22 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:08.103 04:16:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:08.103 04:16:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:08.103 04:16:22 -- common/autotest_common.sh@10 -- # set +x 00:19:13.407 04:16:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:13.407 04:16:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:13.407 04:16:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:13.407 04:16:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:13.407 04:16:27 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:13.407 04:16:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:13.407 04:16:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:13.407 04:16:27 -- nvmf/common.sh@294 -- # net_devs=() 00:19:13.407 04:16:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:13.407 04:16:27 -- nvmf/common.sh@295 -- # e810=() 00:19:13.407 04:16:27 -- nvmf/common.sh@295 -- # local -ga e810 00:19:13.407 04:16:27 -- nvmf/common.sh@296 -- # x722=() 00:19:13.407 04:16:27 -- nvmf/common.sh@296 -- # local -ga x722 00:19:13.407 04:16:27 -- nvmf/common.sh@297 -- # mlx=() 00:19:13.407 04:16:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:13.407 04:16:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.407 04:16:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:13.407 04:16:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:13.407 04:16:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.407 04:16:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:13.407 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:13.407 04:16:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.407 04:16:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:13.407 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:13.407 04:16:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:13.407 04:16:27 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.407 04:16:27 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.407 04:16:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.407 04:16:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.407 04:16:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:13.407 Found net devices under 0000:27:00.0: cvl_0_0 00:19:13.407 04:16:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.407 04:16:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.407 04:16:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.407 04:16:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.407 04:16:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.407 04:16:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:13.407 Found net devices under 0000:27:00.1: cvl_0_1 00:19:13.407 04:16:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.407 04:16:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:13.407 04:16:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:13.407 04:16:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:13.407 04:16:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:13.407 04:16:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.407 04:16:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:13.407 04:16:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:13.407 04:16:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:13.407 04:16:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:13.407 04:16:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:13.407 04:16:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:13.407 04:16:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:13.407 04:16:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.407 04:16:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:13.407 04:16:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:13.407 04:16:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:13.407 04:16:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:13.407 04:16:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:13.407 04:16:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:13.407 04:16:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:13.407 04:16:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:13.407 04:16:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:13.407 04:16:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:13.407 04:16:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:13.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:13.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:19:13.407 00:19:13.407 --- 10.0.0.2 ping statistics --- 00:19:13.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.407 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:19:13.407 04:16:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:13.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:13.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:19:13.668 00:19:13.668 --- 10.0.0.1 ping statistics --- 00:19:13.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.668 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:19:13.668 04:16:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.668 04:16:27 -- nvmf/common.sh@410 -- # return 0 00:19:13.668 04:16:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:13.668 04:16:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.668 04:16:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:13.668 04:16:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:13.668 04:16:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.668 04:16:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:13.668 04:16:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:13.668 04:16:28 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:13.668 04:16:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:13.668 04:16:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:13.668 04:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.668 04:16:28 -- nvmf/common.sh@469 -- # nvmfpid=3991291 00:19:13.668 04:16:28 -- nvmf/common.sh@470 -- # waitforlisten 3991291 00:19:13.668 04:16:28 -- common/autotest_common.sh@819 -- # '[' -z 3991291 ']' 00:19:13.668 04:16:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.668 04:16:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:13.668 04:16:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.668 04:16:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:13.668 04:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.668 04:16:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:13.668 [2024-05-14 04:16:28.119466] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:13.668 [2024-05-14 04:16:28.119598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.668 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.929 [2024-05-14 04:16:28.260276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:13.929 [2024-05-14 04:16:28.355328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:13.929 [2024-05-14 04:16:28.355526] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.930 [2024-05-14 04:16:28.355541] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.930 [2024-05-14 04:16:28.355550] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
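[editorial sketch] The topology set up above moves one port of the NIC pair (cvl_0_0) into a private network namespace for the target and leaves its sibling (cvl_0_1) in the root namespace as the initiator side, then launches nvmf_tgt inside that namespace. A condensed sketch of the commands traced above (addresses and device names as reported by this run; backgrounding and pid capture for waitforlisten are assumed, since the trace only shows the resulting pid):

    # Target side lives in its own namespace, initiator stays in the root namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp

    # nvmf_tgt then runs inside the target namespace; the test waits for its
    # RPC socket (/var/tmp/spdk.sock) before issuing any rpc.py calls.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"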
00:19:13.930 [2024-05-14 04:16:28.355616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.930 [2024-05-14 04:16:28.355655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.930 [2024-05-14 04:16:28.355760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.930 [2024-05-14 04:16:28.355770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.501 04:16:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:14.501 04:16:28 -- common/autotest_common.sh@852 -- # return 0 00:19:14.501 04:16:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:14.501 04:16:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:14.501 04:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:14.501 04:16:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.501 04:16:28 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:14.501 04:16:28 -- target/invalid.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5105 00:19:14.501 [2024-05-14 04:16:29.000106] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:14.501 04:16:29 -- target/invalid.sh@40 -- # out='request: 00:19:14.501 { 00:19:14.501 "nqn": "nqn.2016-06.io.spdk:cnode5105", 00:19:14.501 "tgt_name": "foobar", 00:19:14.501 "method": "nvmf_create_subsystem", 00:19:14.501 "req_id": 1 00:19:14.501 } 00:19:14.501 Got JSON-RPC error response 00:19:14.502 response: 00:19:14.502 { 00:19:14.502 "code": -32603, 00:19:14.502 "message": "Unable to find target foobar" 00:19:14.502 }' 00:19:14.502 04:16:29 -- target/invalid.sh@41 -- # [[ request: 00:19:14.502 { 00:19:14.502 "nqn": "nqn.2016-06.io.spdk:cnode5105", 00:19:14.502 "tgt_name": "foobar", 00:19:14.502 "method": "nvmf_create_subsystem", 00:19:14.502 "req_id": 1 00:19:14.502 } 00:19:14.502 Got JSON-RPC error response 00:19:14.502 response: 00:19:14.502 { 00:19:14.502 "code": -32603, 00:19:14.502 "message": "Unable to find target foobar" 00:19:14.502 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:14.502 04:16:29 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:14.502 04:16:29 -- target/invalid.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16170 00:19:14.772 [2024-05-14 04:16:29.160373] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16170: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:14.772 04:16:29 -- target/invalid.sh@45 -- # out='request: 00:19:14.772 { 00:19:14.772 "nqn": "nqn.2016-06.io.spdk:cnode16170", 00:19:14.772 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:14.772 "method": "nvmf_create_subsystem", 00:19:14.772 "req_id": 1 00:19:14.772 } 00:19:14.772 Got JSON-RPC error response 00:19:14.772 response: 00:19:14.772 { 00:19:14.772 "code": -32602, 00:19:14.772 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:14.772 }' 00:19:14.772 04:16:29 -- target/invalid.sh@46 -- # [[ request: 00:19:14.772 { 00:19:14.772 "nqn": "nqn.2016-06.io.spdk:cnode16170", 00:19:14.772 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:14.772 "method": "nvmf_create_subsystem", 00:19:14.772 "req_id": 1 00:19:14.772 } 00:19:14.772 Got JSON-RPC error response 00:19:14.772 response: 00:19:14.772 { 00:19:14.772 "code": 
-32602, 00:19:14.772 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:14.772 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:14.772 04:16:29 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:14.772 04:16:29 -- target/invalid.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17042 00:19:14.772 [2024-05-14 04:16:29.328499] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17042: invalid model number 'SPDK_Controller' 00:19:14.772 04:16:29 -- target/invalid.sh@50 -- # out='request: 00:19:14.772 { 00:19:14.772 "nqn": "nqn.2016-06.io.spdk:cnode17042", 00:19:14.772 "model_number": "SPDK_Controller\u001f", 00:19:14.772 "method": "nvmf_create_subsystem", 00:19:14.772 "req_id": 1 00:19:14.772 } 00:19:14.772 Got JSON-RPC error response 00:19:14.772 response: 00:19:14.772 { 00:19:14.772 "code": -32602, 00:19:14.772 "message": "Invalid MN SPDK_Controller\u001f" 00:19:14.772 }' 00:19:14.772 04:16:29 -- target/invalid.sh@51 -- # [[ request: 00:19:14.772 { 00:19:14.772 "nqn": "nqn.2016-06.io.spdk:cnode17042", 00:19:14.772 "model_number": "SPDK_Controller\u001f", 00:19:14.772 "method": "nvmf_create_subsystem", 00:19:14.772 "req_id": 1 00:19:14.772 } 00:19:14.772 Got JSON-RPC error response 00:19:14.772 response: 00:19:14.772 { 00:19:14.772 "code": -32602, 00:19:14.772 "message": "Invalid MN SPDK_Controller\u001f" 00:19:14.772 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:14.772 04:16:29 -- target/invalid.sh@54 -- # gen_random_s 21 00:19:14.772 04:16:29 -- target/invalid.sh@19 -- # local length=21 ll 00:19:14.772 04:16:29 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:14.772 04:16:29 -- target/invalid.sh@21 -- # local chars 00:19:14.772 04:16:29 -- target/invalid.sh@22 -- # local string 00:19:14.772 04:16:29 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:14.772 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 88 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x58' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=X 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 86 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x56' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=V 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 69 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x45' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=E 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 86 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x56' 
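[editorial sketch] Each negative case above (unknown target, unprintable serial number, unprintable model number) follows the same pattern: issue the RPC with a deliberately bad argument, capture the JSON-RPC error text, and glob-match the "message" field. A minimal sketch of that pattern, using $rpc as set at invalid.sh@12; the exact error-capture handling (2>&1 and the || true guard) is an assumption, since only the captured text and the [[ ]] checks appear in the trace:

    # Ask for a subsystem on a target that does not exist and assert on the error text.
    out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5105 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

    # Same idea for a serial number containing an unprintable byte (0x1f appended here).
    out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16170 2>&1) || true
    [[ $out == *"Invalid SN"* ]]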
00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=V 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 113 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x71' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=q 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 120 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x78' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=x 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 103 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=g 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 70 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x46' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=F 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 127 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=$'\177' 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 127 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=$'\177' 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 103 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=g 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 50 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=2 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 119 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x77' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=w 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 32 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x20' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=' ' 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 73 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo 
-e '\x49' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=I 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 108 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=l 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 35 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x23' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+='#' 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.034 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # printf %x 101 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x65' 00:19:15.034 04:16:29 -- target/invalid.sh@25 -- # string+=e 00:19:15.035 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.035 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.035 04:16:29 -- target/invalid.sh@25 -- # printf %x 72 00:19:15.035 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:15.035 04:16:29 -- target/invalid.sh@25 -- # string+=H 00:19:15.035 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.035 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.035 04:16:29 -- target/invalid.sh@25 -- # printf %x 44 00:19:15.035 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:19:15.035 04:16:29 -- target/invalid.sh@25 -- # string+=, 00:19:15.035 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.035 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.035 04:16:29 -- target/invalid.sh@25 -- # printf %x 66 00:19:15.035 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x42' 00:19:15.035 04:16:29 -- target/invalid.sh@25 -- # string+=B 00:19:15.035 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.035 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.035 04:16:29 -- target/invalid.sh@28 -- # [[ X == \- ]] 00:19:15.035 04:16:29 -- target/invalid.sh@31 -- # echo 'XVEVqxgFg2w Il#eH,B' 00:19:15.035 04:16:29 -- target/invalid.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'XVEVqxgFg2w Il#eH,B' nqn.2016-06.io.spdk:cnode7088 00:19:15.296 [2024-05-14 04:16:29.624906] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7088: invalid serial number 'XVEVqxgFg2w Il#eH,B' 00:19:15.296 04:16:29 -- target/invalid.sh@54 -- # out='request: 00:19:15.296 { 00:19:15.296 "nqn": "nqn.2016-06.io.spdk:cnode7088", 00:19:15.296 "serial_number": "XVEVqxgF\u007f\u007fg2w Il#eH,B", 00:19:15.296 "method": "nvmf_create_subsystem", 00:19:15.296 "req_id": 1 00:19:15.296 } 00:19:15.296 Got JSON-RPC error response 00:19:15.296 response: 00:19:15.296 { 00:19:15.296 "code": -32602, 00:19:15.296 "message": "Invalid SN XVEVqxgF\u007f\u007fg2w Il#eH,B" 00:19:15.296 }' 00:19:15.296 04:16:29 -- target/invalid.sh@55 -- # [[ request: 00:19:15.296 { 00:19:15.296 "nqn": "nqn.2016-06.io.spdk:cnode7088", 00:19:15.296 "serial_number": "XVEVqxgF\u007f\u007fg2w Il#eH,B", 00:19:15.296 "method": "nvmf_create_subsystem", 00:19:15.296 "req_id": 1 00:19:15.296 } 00:19:15.296 Got JSON-RPC error response 00:19:15.296 response: 00:19:15.296 { 00:19:15.296 "code": -32602, 00:19:15.296 "message": "Invalid SN 
XVEVqxgF\u007f\u007fg2w Il#eH,B" 00:19:15.296 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:15.296 04:16:29 -- target/invalid.sh@58 -- # gen_random_s 41 00:19:15.296 04:16:29 -- target/invalid.sh@19 -- # local length=41 ll 00:19:15.296 04:16:29 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:15.296 04:16:29 -- target/invalid.sh@21 -- # local chars 00:19:15.296 04:16:29 -- target/invalid.sh@22 -- # local string 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # printf %x 93 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # string+=']' 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # printf %x 47 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # string+=/ 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # printf %x 95 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # string+=_ 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # printf %x 32 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x20' 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # string+=' ' 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # printf %x 78 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # string+=N 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # printf %x 59 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # string+=';' 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.296 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # printf %x 82 00:19:15.296 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x52' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=R 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 50 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=2 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 
00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 101 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x65' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=e 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 34 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x22' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+='"' 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 115 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x73' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=s 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 59 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=';' 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 47 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=/ 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 70 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x46' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=F 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 45 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=- 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 108 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=l 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 70 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x46' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=F 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 105 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x69' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=i 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 76 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=L 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 
00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 103 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=g 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 56 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x38' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=8 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 62 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+='>' 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 48 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x30' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=0 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 114 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x72' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=r 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 90 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=Z 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 71 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x47' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=G 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 98 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x62' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=b 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 84 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x54' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=T 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 58 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=: 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 77 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=M 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 
00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 98 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x62' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=b 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 87 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x57' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=W 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 66 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x42' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=B 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 101 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x65' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=e 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 52 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x34' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=4 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # printf %x 51 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x33' 00:19:15.297 04:16:29 -- target/invalid.sh@25 -- # string+=3 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.297 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # printf %x 72 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # string+=H 00:19:15.558 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.558 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # printf %x 46 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # string+=. 
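[editorial sketch] The long printf/echo runs here are the gen_random_s helper from invalid.sh building a random serial or model string one character at a time from ASCII codes 32-127 (RANDOM is seeded to 0 at invalid.sh@16, so the strings are reproducible across runs). A rough sketch of the technique, not the verbatim helper; the leading-dash fallback is assumed, since the trace only shows the check at invalid.sh@28:

    # Build a string of $1 characters drawn from ASCII codes 32-127.
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))
        for ((ll = 0; ll < length; ll++)); do
            # pick a code, convert it to a hex escape, append the rendered character
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        if [[ $string == -* ]]; then
            # assumed fallback: avoid a leading '-' that rpc.py would read as an option
            string=" ${string:1}"
        fi
        echo "$string"
    }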
00:19:15.558 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.558 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # printf %x 88 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x58' 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # string+=X 00:19:15.558 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.558 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # printf %x 59 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # string+=';' 00:19:15.558 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.558 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # printf %x 92 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:19:15.558 04:16:29 -- target/invalid.sh@25 -- # string+='\' 00:19:15.558 04:16:29 -- target/invalid.sh@24 -- # (( ll++ )) 00:19:15.558 04:16:29 -- target/invalid.sh@24 -- # (( ll < length )) 00:19:15.559 04:16:29 -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:19:15.559 04:16:29 -- target/invalid.sh@31 -- # echo ']/_ N;R2e"s;/F-lFiLg8>0rZGbT:MbWBe43H.X;\' 00:19:15.559 04:16:29 -- target/invalid.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ']/_ N;R2e"s;/F-lFiLg8>0rZGbT:MbWBe43H.X;\' nqn.2016-06.io.spdk:cnode5783 00:19:15.559 [2024-05-14 04:16:30.045428] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5783: invalid model number ']/_ N;R2e"s;/F-lFiLg8>0rZGbT:MbWBe43H.X;\' 00:19:15.559 04:16:30 -- target/invalid.sh@58 -- # out='request: 00:19:15.559 { 00:19:15.559 "nqn": "nqn.2016-06.io.spdk:cnode5783", 00:19:15.559 "model_number": "]/_ N;R2e\"s;/F-lFiLg8>0rZGbT:MbWBe43H.X;\\", 00:19:15.559 "method": "nvmf_create_subsystem", 00:19:15.559 "req_id": 1 00:19:15.559 } 00:19:15.559 Got JSON-RPC error response 00:19:15.559 response: 00:19:15.559 { 00:19:15.559 "code": -32602, 00:19:15.559 "message": "Invalid MN ]/_ N;R2e\"s;/F-lFiLg8>0rZGbT:MbWBe43H.X;\\" 00:19:15.559 }' 00:19:15.559 04:16:30 -- target/invalid.sh@59 -- # [[ request: 00:19:15.559 { 00:19:15.559 "nqn": "nqn.2016-06.io.spdk:cnode5783", 00:19:15.559 "model_number": "]/_ N;R2e\"s;/F-lFiLg8>0rZGbT:MbWBe43H.X;\\", 00:19:15.559 "method": "nvmf_create_subsystem", 00:19:15.559 "req_id": 1 00:19:15.559 } 00:19:15.559 Got JSON-RPC error response 00:19:15.559 response: 00:19:15.559 { 00:19:15.559 "code": -32602, 00:19:15.559 "message": "Invalid MN ]/_ N;R2e\"s;/F-lFiLg8>0rZGbT:MbWBe43H.X;\\" 00:19:15.559 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:15.559 04:16:30 -- target/invalid.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:19:15.819 [2024-05-14 04:16:30.197674] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.819 04:16:30 -- target/invalid.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:19:15.819 04:16:30 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:19:15.819 04:16:30 -- target/invalid.sh@67 -- # echo '' 00:19:15.819 04:16:30 -- target/invalid.sh@67 -- # head -n 1 00:19:15.819 04:16:30 -- target/invalid.sh@67 -- # IP= 00:19:15.819 04:16:30 -- target/invalid.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp 
-a '' -s 4421 00:19:16.078 [2024-05-14 04:16:30.518031] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:19:16.078 04:16:30 -- target/invalid.sh@69 -- # out='request: 00:19:16.078 { 00:19:16.078 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:16.078 "listen_address": { 00:19:16.078 "trtype": "tcp", 00:19:16.078 "traddr": "", 00:19:16.078 "trsvcid": "4421" 00:19:16.078 }, 00:19:16.078 "method": "nvmf_subsystem_remove_listener", 00:19:16.078 "req_id": 1 00:19:16.078 } 00:19:16.078 Got JSON-RPC error response 00:19:16.078 response: 00:19:16.079 { 00:19:16.079 "code": -32602, 00:19:16.079 "message": "Invalid parameters" 00:19:16.079 }' 00:19:16.079 04:16:30 -- target/invalid.sh@70 -- # [[ request: 00:19:16.079 { 00:19:16.079 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:16.079 "listen_address": { 00:19:16.079 "trtype": "tcp", 00:19:16.079 "traddr": "", 00:19:16.079 "trsvcid": "4421" 00:19:16.079 }, 00:19:16.079 "method": "nvmf_subsystem_remove_listener", 00:19:16.079 "req_id": 1 00:19:16.079 } 00:19:16.079 Got JSON-RPC error response 00:19:16.079 response: 00:19:16.079 { 00:19:16.079 "code": -32602, 00:19:16.079 "message": "Invalid parameters" 00:19:16.079 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:19:16.079 04:16:30 -- target/invalid.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28773 -i 0 00:19:16.337 [2024-05-14 04:16:30.682166] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28773: invalid cntlid range [0-65519] 00:19:16.337 04:16:30 -- target/invalid.sh@73 -- # out='request: 00:19:16.337 { 00:19:16.337 "nqn": "nqn.2016-06.io.spdk:cnode28773", 00:19:16.337 "min_cntlid": 0, 00:19:16.337 "method": "nvmf_create_subsystem", 00:19:16.337 "req_id": 1 00:19:16.337 } 00:19:16.337 Got JSON-RPC error response 00:19:16.337 response: 00:19:16.337 { 00:19:16.337 "code": -32602, 00:19:16.337 "message": "Invalid cntlid range [0-65519]" 00:19:16.337 }' 00:19:16.337 04:16:30 -- target/invalid.sh@74 -- # [[ request: 00:19:16.337 { 00:19:16.337 "nqn": "nqn.2016-06.io.spdk:cnode28773", 00:19:16.337 "min_cntlid": 0, 00:19:16.337 "method": "nvmf_create_subsystem", 00:19:16.337 "req_id": 1 00:19:16.337 } 00:19:16.337 Got JSON-RPC error response 00:19:16.337 response: 00:19:16.337 { 00:19:16.337 "code": -32602, 00:19:16.337 "message": "Invalid cntlid range [0-65519]" 00:19:16.337 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:16.337 04:16:30 -- target/invalid.sh@75 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32752 -i 65520 00:19:16.337 [2024-05-14 04:16:30.838400] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32752: invalid cntlid range [65520-65519] 00:19:16.337 04:16:30 -- target/invalid.sh@75 -- # out='request: 00:19:16.337 { 00:19:16.337 "nqn": "nqn.2016-06.io.spdk:cnode32752", 00:19:16.337 "min_cntlid": 65520, 00:19:16.337 "method": "nvmf_create_subsystem", 00:19:16.337 "req_id": 1 00:19:16.337 } 00:19:16.337 Got JSON-RPC error response 00:19:16.337 response: 00:19:16.337 { 00:19:16.337 "code": -32602, 00:19:16.337 "message": "Invalid cntlid range [65520-65519]" 00:19:16.337 }' 00:19:16.337 04:16:30 -- target/invalid.sh@76 -- # [[ request: 00:19:16.337 { 00:19:16.337 "nqn": "nqn.2016-06.io.spdk:cnode32752", 00:19:16.337 "min_cntlid": 65520, 00:19:16.337 "method": "nvmf_create_subsystem", 00:19:16.337 "req_id": 1 00:19:16.337 } 
00:19:16.337 Got JSON-RPC error response 00:19:16.337 response: 00:19:16.337 { 00:19:16.337 "code": -32602, 00:19:16.337 "message": "Invalid cntlid range [65520-65519]" 00:19:16.337 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:16.337 04:16:30 -- target/invalid.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1708 -I 0 00:19:16.596 [2024-05-14 04:16:30.994590] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1708: invalid cntlid range [1-0] 00:19:16.596 04:16:31 -- target/invalid.sh@77 -- # out='request: 00:19:16.596 { 00:19:16.596 "nqn": "nqn.2016-06.io.spdk:cnode1708", 00:19:16.596 "max_cntlid": 0, 00:19:16.596 "method": "nvmf_create_subsystem", 00:19:16.596 "req_id": 1 00:19:16.596 } 00:19:16.596 Got JSON-RPC error response 00:19:16.596 response: 00:19:16.596 { 00:19:16.596 "code": -32602, 00:19:16.596 "message": "Invalid cntlid range [1-0]" 00:19:16.596 }' 00:19:16.596 04:16:31 -- target/invalid.sh@78 -- # [[ request: 00:19:16.596 { 00:19:16.596 "nqn": "nqn.2016-06.io.spdk:cnode1708", 00:19:16.596 "max_cntlid": 0, 00:19:16.596 "method": "nvmf_create_subsystem", 00:19:16.596 "req_id": 1 00:19:16.596 } 00:19:16.596 Got JSON-RPC error response 00:19:16.596 response: 00:19:16.596 { 00:19:16.596 "code": -32602, 00:19:16.596 "message": "Invalid cntlid range [1-0]" 00:19:16.596 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:16.596 04:16:31 -- target/invalid.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6737 -I 65520 00:19:16.596 [2024-05-14 04:16:31.134782] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6737: invalid cntlid range [1-65520] 00:19:16.596 04:16:31 -- target/invalid.sh@79 -- # out='request: 00:19:16.596 { 00:19:16.596 "nqn": "nqn.2016-06.io.spdk:cnode6737", 00:19:16.596 "max_cntlid": 65520, 00:19:16.596 "method": "nvmf_create_subsystem", 00:19:16.596 "req_id": 1 00:19:16.596 } 00:19:16.596 Got JSON-RPC error response 00:19:16.596 response: 00:19:16.596 { 00:19:16.596 "code": -32602, 00:19:16.596 "message": "Invalid cntlid range [1-65520]" 00:19:16.596 }' 00:19:16.596 04:16:31 -- target/invalid.sh@80 -- # [[ request: 00:19:16.596 { 00:19:16.596 "nqn": "nqn.2016-06.io.spdk:cnode6737", 00:19:16.596 "max_cntlid": 65520, 00:19:16.596 "method": "nvmf_create_subsystem", 00:19:16.596 "req_id": 1 00:19:16.596 } 00:19:16.596 Got JSON-RPC error response 00:19:16.596 response: 00:19:16.596 { 00:19:16.596 "code": -32602, 00:19:16.596 "message": "Invalid cntlid range [1-65520]" 00:19:16.596 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:16.596 04:16:31 -- target/invalid.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8562 -i 6 -I 5 00:19:16.855 [2024-05-14 04:16:31.275002] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8562: invalid cntlid range [6-5] 00:19:16.855 04:16:31 -- target/invalid.sh@83 -- # out='request: 00:19:16.855 { 00:19:16.855 "nqn": "nqn.2016-06.io.spdk:cnode8562", 00:19:16.855 "min_cntlid": 6, 00:19:16.855 "max_cntlid": 5, 00:19:16.855 "method": "nvmf_create_subsystem", 00:19:16.855 "req_id": 1 00:19:16.855 } 00:19:16.855 Got JSON-RPC error response 00:19:16.855 response: 00:19:16.855 { 00:19:16.855 "code": -32602, 00:19:16.855 "message": "Invalid cntlid range [6-5]" 00:19:16.855 }' 00:19:16.856 04:16:31 -- 
target/invalid.sh@84 -- # [[ request: 00:19:16.856 { 00:19:16.856 "nqn": "nqn.2016-06.io.spdk:cnode8562", 00:19:16.856 "min_cntlid": 6, 00:19:16.856 "max_cntlid": 5, 00:19:16.856 "method": "nvmf_create_subsystem", 00:19:16.856 "req_id": 1 00:19:16.856 } 00:19:16.856 Got JSON-RPC error response 00:19:16.856 response: 00:19:16.856 { 00:19:16.856 "code": -32602, 00:19:16.856 "message": "Invalid cntlid range [6-5]" 00:19:16.856 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:16.856 04:16:31 -- target/invalid.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:19:16.856 04:16:31 -- target/invalid.sh@87 -- # out='request: 00:19:16.856 { 00:19:16.856 "name": "foobar", 00:19:16.856 "method": "nvmf_delete_target", 00:19:16.856 "req_id": 1 00:19:16.856 } 00:19:16.856 Got JSON-RPC error response 00:19:16.856 response: 00:19:16.856 { 00:19:16.856 "code": -32602, 00:19:16.856 "message": "The specified target doesn'\''t exist, cannot delete it." 00:19:16.856 }' 00:19:16.856 04:16:31 -- target/invalid.sh@88 -- # [[ request: 00:19:16.856 { 00:19:16.856 "name": "foobar", 00:19:16.856 "method": "nvmf_delete_target", 00:19:16.856 "req_id": 1 00:19:16.856 } 00:19:16.856 Got JSON-RPC error response 00:19:16.856 response: 00:19:16.856 { 00:19:16.856 "code": -32602, 00:19:16.856 "message": "The specified target doesn't exist, cannot delete it." 00:19:16.856 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:19:16.856 04:16:31 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:19:16.856 04:16:31 -- target/invalid.sh@91 -- # nvmftestfini 00:19:16.856 04:16:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:16.856 04:16:31 -- nvmf/common.sh@116 -- # sync 00:19:16.856 04:16:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:16.856 04:16:31 -- nvmf/common.sh@119 -- # set +e 00:19:16.856 04:16:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:16.856 04:16:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:16.856 rmmod nvme_tcp 00:19:16.856 rmmod nvme_fabrics 00:19:16.856 rmmod nvme_keyring 00:19:16.856 04:16:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:16.856 04:16:31 -- nvmf/common.sh@123 -- # set -e 00:19:16.856 04:16:31 -- nvmf/common.sh@124 -- # return 0 00:19:16.856 04:16:31 -- nvmf/common.sh@477 -- # '[' -n 3991291 ']' 00:19:16.856 04:16:31 -- nvmf/common.sh@478 -- # killprocess 3991291 00:19:16.856 04:16:31 -- common/autotest_common.sh@926 -- # '[' -z 3991291 ']' 00:19:16.856 04:16:31 -- common/autotest_common.sh@930 -- # kill -0 3991291 00:19:16.856 04:16:31 -- common/autotest_common.sh@931 -- # uname 00:19:16.856 04:16:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:17.116 04:16:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3991291 00:19:17.116 04:16:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:17.116 04:16:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:17.116 04:16:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3991291' 00:19:17.116 killing process with pid 3991291 00:19:17.116 04:16:31 -- common/autotest_common.sh@945 -- # kill 3991291 00:19:17.116 04:16:31 -- common/autotest_common.sh@950 -- # wait 3991291 00:19:17.377 04:16:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:17.377 04:16:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:17.377 04:16:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 
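The invalid-parameter cases above all exercise the same rpc_nvmf_create_subsystem validation path and can be replayed by hand against any running nvmf target. A minimal sketch, assuming rpc.py talks to the default /var/tmp/spdk.sock and using a placeholder NQN; all three calls are expected to fail with code -32602, mirroring the responses logged above:

  scripts/rpc.py nvmf_create_transport --trtype tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode-demo -i 0        # Invalid cntlid range [0-65519]
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode-demo -I 65520    # Invalid cntlid range [1-65520]
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode-demo -i 6 -I 5   # Invalid cntlid range [6-5]
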
00:19:17.377 04:16:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.377 04:16:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:17.377 04:16:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.377 04:16:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.377 04:16:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.914 04:16:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:19.914 00:19:19.914 real 0m11.664s 00:19:19.914 user 0m17.463s 00:19:19.914 sys 0m5.120s 00:19:19.914 04:16:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.914 04:16:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.914 ************************************ 00:19:19.914 END TEST nvmf_invalid 00:19:19.914 ************************************ 00:19:19.914 04:16:34 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:19.914 04:16:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:19.914 04:16:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:19.914 04:16:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.914 ************************************ 00:19:19.914 START TEST nvmf_abort 00:19:19.914 ************************************ 00:19:19.914 04:16:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:19.914 * Looking for test storage... 00:19:19.914 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:19.914 04:16:34 -- target/abort.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.914 04:16:34 -- nvmf/common.sh@7 -- # uname -s 00:19:19.914 04:16:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.914 04:16:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.914 04:16:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.914 04:16:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.914 04:16:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.914 04:16:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.914 04:16:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.914 04:16:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.914 04:16:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.914 04:16:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.914 04:16:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:19.914 04:16:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:19:19.914 04:16:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.914 04:16:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.914 04:16:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:19.914 04:16:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:19.914 04:16:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.914 04:16:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.914 04:16:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.914 04:16:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.914 04:16:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.914 04:16:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.914 04:16:34 -- paths/export.sh@5 -- # export PATH 00:19:19.914 04:16:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.914 04:16:34 -- nvmf/common.sh@46 -- # : 0 00:19:19.914 04:16:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:19.914 04:16:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:19.914 04:16:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:19.914 04:16:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.914 04:16:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.914 04:16:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:19.914 04:16:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:19.914 04:16:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:19.914 04:16:34 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.914 04:16:34 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:19:19.914 04:16:34 -- target/abort.sh@14 -- # nvmftestinit 00:19:19.914 04:16:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:19.914 04:16:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.914 04:16:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:19.914 04:16:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:19.914 04:16:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:19.914 04:16:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:19.914 04:16:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.914 04:16:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.914 04:16:34 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:19.914 04:16:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:19.914 04:16:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:19.914 04:16:34 -- common/autotest_common.sh@10 -- # set +x 00:19:25.187 04:16:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:25.187 04:16:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:25.187 04:16:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:25.187 04:16:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:25.187 04:16:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:25.187 04:16:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:25.187 04:16:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:25.187 04:16:39 -- nvmf/common.sh@294 -- # net_devs=() 00:19:25.187 04:16:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:25.187 04:16:39 -- nvmf/common.sh@295 -- # e810=() 00:19:25.187 04:16:39 -- nvmf/common.sh@295 -- # local -ga e810 00:19:25.187 04:16:39 -- nvmf/common.sh@296 -- # x722=() 00:19:25.187 04:16:39 -- nvmf/common.sh@296 -- # local -ga x722 00:19:25.187 04:16:39 -- nvmf/common.sh@297 -- # mlx=() 00:19:25.187 04:16:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:25.187 04:16:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.187 04:16:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.187 04:16:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.187 04:16:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.187 04:16:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.187 04:16:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.188 04:16:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.188 04:16:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.188 04:16:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.188 04:16:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.188 04:16:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.188 04:16:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:25.188 04:16:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:25.188 04:16:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:25.188 04:16:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:25.188 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:25.188 04:16:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:25.188 04:16:39 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:25.188 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:25.188 04:16:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:25.188 04:16:39 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:25.188 04:16:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.188 04:16:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:25.188 04:16:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.188 04:16:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:25.188 Found net devices under 0000:27:00.0: cvl_0_0 00:19:25.188 04:16:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.188 04:16:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:25.188 04:16:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.188 04:16:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:25.188 04:16:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.188 04:16:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:25.188 Found net devices under 0000:27:00.1: cvl_0_1 00:19:25.188 04:16:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.188 04:16:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:25.188 04:16:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:25.188 04:16:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:25.188 04:16:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.188 04:16:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.188 04:16:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.188 04:16:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:25.188 04:16:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.188 04:16:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.188 04:16:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:25.188 04:16:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.188 04:16:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.188 04:16:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:25.188 04:16:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:25.188 04:16:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.188 04:16:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.188 04:16:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.188 04:16:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.188 04:16:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:25.188 04:16:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.188 04:16:39 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:19:25.188 04:16:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.188 04:16:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:25.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:19:25.188 00:19:25.188 --- 10.0.0.2 ping statistics --- 00:19:25.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.188 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:19:25.188 04:16:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:25.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:19:25.188 00:19:25.188 --- 10.0.0.1 ping statistics --- 00:19:25.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.188 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:25.188 04:16:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.188 04:16:39 -- nvmf/common.sh@410 -- # return 0 00:19:25.188 04:16:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:25.188 04:16:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.188 04:16:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:25.188 04:16:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.188 04:16:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:25.188 04:16:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:25.188 04:16:39 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:19:25.188 04:16:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:25.188 04:16:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:25.188 04:16:39 -- common/autotest_common.sh@10 -- # set +x 00:19:25.188 04:16:39 -- nvmf/common.sh@469 -- # nvmfpid=3995979 00:19:25.188 04:16:39 -- nvmf/common.sh@470 -- # waitforlisten 3995979 00:19:25.188 04:16:39 -- common/autotest_common.sh@819 -- # '[' -z 3995979 ']' 00:19:25.188 04:16:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.188 04:16:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:25.188 04:16:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.188 04:16:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:25.188 04:16:39 -- common/autotest_common.sh@10 -- # set +x 00:19:25.188 04:16:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:25.188 [2024-05-14 04:16:39.502234] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
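Stripped of the xtrace noise, the nvmf_tcp_init block above is plain ip/netns plumbing: one port of the NIC is moved into a namespace to act as the target, while the peer port stays on the host as the initiator. A sketch using the cvl_0_0/cvl_0_1 names seen on this rig (the names come from the PCI devices and will differ elsewhere):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP through
  ping -c 1 10.0.0.2                                               # host -> namespace sanity check
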
00:19:25.188 [2024-05-14 04:16:39.502346] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.188 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.188 [2024-05-14 04:16:39.625651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:25.188 [2024-05-14 04:16:39.723093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:25.188 [2024-05-14 04:16:39.723283] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.188 [2024-05-14 04:16:39.723298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.188 [2024-05-14 04:16:39.723307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.188 [2024-05-14 04:16:39.723378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.188 [2024-05-14 04:16:39.723482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.188 [2024-05-14 04:16:39.723492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:25.756 04:16:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:25.756 04:16:40 -- common/autotest_common.sh@852 -- # return 0 00:19:25.756 04:16:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:25.756 04:16:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:25.756 04:16:40 -- common/autotest_common.sh@10 -- # set +x 00:19:25.756 04:16:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.756 04:16:40 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:19:25.756 04:16:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.756 04:16:40 -- common/autotest_common.sh@10 -- # set +x 00:19:25.756 [2024-05-14 04:16:40.237530] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.756 04:16:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.756 04:16:40 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:19:25.756 04:16:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.756 04:16:40 -- common/autotest_common.sh@10 -- # set +x 00:19:25.756 Malloc0 00:19:25.756 04:16:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.756 04:16:40 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:25.756 04:16:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.756 04:16:40 -- common/autotest_common.sh@10 -- # set +x 00:19:25.756 Delay0 00:19:25.756 04:16:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.756 04:16:40 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:25.756 04:16:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.756 04:16:40 -- common/autotest_common.sh@10 -- # set +x 00:19:25.756 04:16:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.756 04:16:40 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:19:25.756 04:16:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.756 04:16:40 -- common/autotest_common.sh@10 -- # set +x 00:19:25.756 04:16:40 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:19:25.756 04:16:40 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:25.756 04:16:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.756 04:16:40 -- common/autotest_common.sh@10 -- # set +x 00:19:25.756 [2024-05-14 04:16:40.325308] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.756 04:16:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.756 04:16:40 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:25.756 04:16:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:25.756 04:16:40 -- common/autotest_common.sh@10 -- # set +x 00:19:25.756 04:16:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:25.756 04:16:40 -- target/abort.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:19:26.016 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.016 [2024-05-14 04:16:40.468901] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:28.554 Initializing NVMe Controllers 00:19:28.554 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:28.554 controller IO queue size 128 less than required 00:19:28.554 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:19:28.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:28.554 Initialization complete. Launching workers. 00:19:28.554 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 47362 00:19:28.554 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 47423, failed to submit 62 00:19:28.554 success 47362, unsuccess 61, failed 0 00:19:28.554 04:16:42 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:28.554 04:16:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.554 04:16:42 -- common/autotest_common.sh@10 -- # set +x 00:19:28.554 04:16:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.554 04:16:42 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:28.554 04:16:42 -- target/abort.sh@38 -- # nvmftestfini 00:19:28.554 04:16:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:28.554 04:16:42 -- nvmf/common.sh@116 -- # sync 00:19:28.554 04:16:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:28.554 04:16:42 -- nvmf/common.sh@119 -- # set +e 00:19:28.554 04:16:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:28.554 04:16:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:28.554 rmmod nvme_tcp 00:19:28.554 rmmod nvme_fabrics 00:19:28.554 rmmod nvme_keyring 00:19:28.554 04:16:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:28.554 04:16:42 -- nvmf/common.sh@123 -- # set -e 00:19:28.554 04:16:42 -- nvmf/common.sh@124 -- # return 0 00:19:28.554 04:16:42 -- nvmf/common.sh@477 -- # '[' -n 3995979 ']' 00:19:28.554 04:16:42 -- nvmf/common.sh@478 -- # killprocess 3995979 00:19:28.554 04:16:42 -- common/autotest_common.sh@926 -- # '[' -z 3995979 ']' 00:19:28.554 04:16:42 -- common/autotest_common.sh@930 -- # kill -0 3995979 00:19:28.554 04:16:42 -- common/autotest_common.sh@931 -- # uname 00:19:28.554 04:16:42 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:28.554 04:16:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3995979 00:19:28.554 04:16:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:28.554 04:16:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:28.554 04:16:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3995979' 00:19:28.554 killing process with pid 3995979 00:19:28.554 04:16:42 -- common/autotest_common.sh@945 -- # kill 3995979 00:19:28.554 04:16:42 -- common/autotest_common.sh@950 -- # wait 3995979 00:19:28.812 04:16:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:28.812 04:16:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:28.812 04:16:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:28.812 04:16:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:28.812 04:16:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:28.812 04:16:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.812 04:16:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.812 04:16:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.727 04:16:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:30.727 00:19:30.727 real 0m11.228s 00:19:30.727 user 0m13.716s 00:19:30.727 sys 0m4.534s 00:19:30.727 04:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.727 04:16:45 -- common/autotest_common.sh@10 -- # set +x 00:19:30.727 ************************************ 00:19:30.727 END TEST nvmf_abort 00:19:30.727 ************************************ 00:19:30.986 04:16:45 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:30.986 04:16:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:30.986 04:16:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:30.986 04:16:45 -- common/autotest_common.sh@10 -- # set +x 00:19:30.986 ************************************ 00:19:30.986 START TEST nvmf_ns_hotplug_stress 00:19:30.986 ************************************ 00:19:30.986 04:16:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:30.986 * Looking for test storage... 
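For reference, the abort statistics reported a few lines above (47423 aborts submitted, 62 failed to submit, 0 failed) come from the stock abort example started by abort.sh@30; the same run can be repeated manually against that test's listener. A sketch, assuming the target is still up on 10.0.0.2:4420 and working from the spdk checkout:

  ./build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128      # one core, 1 s run, queue depth 128, warnings only
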
00:19:30.986 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:30.986 04:16:45 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.986 04:16:45 -- nvmf/common.sh@7 -- # uname -s 00:19:30.986 04:16:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.986 04:16:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.986 04:16:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.986 04:16:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.986 04:16:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.986 04:16:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.986 04:16:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.986 04:16:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.986 04:16:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.986 04:16:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.986 04:16:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:30.986 04:16:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:19:30.986 04:16:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.986 04:16:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.986 04:16:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:30.986 04:16:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:30.986 04:16:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.986 04:16:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.986 04:16:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.986 04:16:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.986 04:16:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.986 04:16:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.986 04:16:45 -- paths/export.sh@5 -- # export PATH 00:19:30.986 04:16:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.986 04:16:45 -- nvmf/common.sh@46 -- # : 0 00:19:30.986 04:16:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:30.986 04:16:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:30.986 04:16:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:30.986 04:16:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.986 04:16:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.986 04:16:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:30.986 04:16:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:30.986 04:16:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:30.986 04:16:45 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:30.986 04:16:45 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:19:30.986 04:16:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:30.986 04:16:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.986 04:16:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:30.986 04:16:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:30.986 04:16:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:30.986 04:16:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.986 04:16:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.986 04:16:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.986 04:16:45 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:30.986 04:16:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:30.986 04:16:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:30.986 04:16:45 -- common/autotest_common.sh@10 -- # set +x 00:19:37.565 04:16:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:37.565 04:16:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:37.565 04:16:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:37.565 04:16:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:37.565 04:16:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:37.565 04:16:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:37.565 04:16:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:37.565 04:16:51 -- nvmf/common.sh@294 -- # net_devs=() 00:19:37.565 04:16:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:37.565 04:16:51 -- 
nvmf/common.sh@295 -- # e810=() 00:19:37.565 04:16:51 -- nvmf/common.sh@295 -- # local -ga e810 00:19:37.565 04:16:51 -- nvmf/common.sh@296 -- # x722=() 00:19:37.565 04:16:51 -- nvmf/common.sh@296 -- # local -ga x722 00:19:37.565 04:16:51 -- nvmf/common.sh@297 -- # mlx=() 00:19:37.565 04:16:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:37.565 04:16:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.565 04:16:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:37.565 04:16:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:37.565 04:16:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.565 04:16:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:37.565 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:37.565 04:16:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.565 04:16:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:37.565 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:37.565 04:16:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:37.565 04:16:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:37.565 04:16:51 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:37.566 04:16:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.566 04:16:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.566 04:16:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.566 04:16:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.566 04:16:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:37.566 Found net devices under 0000:27:00.0: cvl_0_0 00:19:37.566 
04:16:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.566 04:16:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.566 04:16:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.566 04:16:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.566 04:16:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.566 04:16:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:37.566 Found net devices under 0000:27:00.1: cvl_0_1 00:19:37.566 04:16:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.566 04:16:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:37.566 04:16:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:37.566 04:16:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:37.566 04:16:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:37.566 04:16:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:37.566 04:16:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.566 04:16:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.566 04:16:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.566 04:16:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:37.566 04:16:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.566 04:16:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.566 04:16:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:37.566 04:16:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.566 04:16:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.566 04:16:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:37.566 04:16:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:37.566 04:16:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.566 04:16:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.566 04:16:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.566 04:16:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.566 04:16:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:37.566 04:16:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.566 04:16:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.566 04:16:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.566 04:16:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:37.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:19:37.566 00:19:37.566 --- 10.0.0.2 ping statistics --- 00:19:37.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.566 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:19:37.566 04:16:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:19:37.566 00:19:37.566 --- 10.0.0.1 ping statistics --- 00:19:37.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.566 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:19:37.566 04:16:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.566 04:16:51 -- nvmf/common.sh@410 -- # return 0 00:19:37.566 04:16:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:37.566 04:16:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.566 04:16:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:37.566 04:16:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:37.566 04:16:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.566 04:16:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:37.566 04:16:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:37.566 04:16:51 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:19:37.566 04:16:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:37.566 04:16:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:37.566 04:16:51 -- common/autotest_common.sh@10 -- # set +x 00:19:37.566 04:16:51 -- nvmf/common.sh@469 -- # nvmfpid=4000799 00:19:37.566 04:16:51 -- nvmf/common.sh@470 -- # waitforlisten 4000799 00:19:37.566 04:16:51 -- common/autotest_common.sh@819 -- # '[' -z 4000799 ']' 00:19:37.566 04:16:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.566 04:16:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:37.566 04:16:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.566 04:16:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:37.566 04:16:51 -- common/autotest_common.sh@10 -- # set +x 00:19:37.566 04:16:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:37.567 [2024-05-14 04:16:51.564601] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:37.567 [2024-05-14 04:16:51.564730] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.567 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.567 [2024-05-14 04:16:51.707836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:37.567 [2024-05-14 04:16:51.807414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:37.567 [2024-05-14 04:16:51.807620] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.567 [2024-05-14 04:16:51.807634] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.567 [2024-05-14 04:16:51.807645] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
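Each target-side test in this job launches its nvmf_tgt the same way as shown above: inside the cvl_0_0_ns_spdk namespace, shm id 0, all tracepoint groups enabled, and core mask 0xE (cores 1-3, matching the three reactors reported in the lines that follow). A sketch of the equivalent manual launch, assuming the namespace plumbing already exists and working from the spdk checkout; the scripts themselves wait via waitforlisten from autotest_common.sh, and polling rpc_get_methods is just one manual stand-in:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null   # block until the RPC socket answers
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
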
00:19:37.567 [2024-05-14 04:16:51.807731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.567 [2024-05-14 04:16:51.807836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.567 [2024-05-14 04:16:51.807846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.887 04:16:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:37.887 04:16:52 -- common/autotest_common.sh@852 -- # return 0 00:19:37.887 04:16:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:37.887 04:16:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:37.888 04:16:52 -- common/autotest_common.sh@10 -- # set +x 00:19:37.888 04:16:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.888 04:16:52 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:19:37.888 04:16:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:37.888 [2024-05-14 04:16:52.447372] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.148 04:16:52 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:38.148 04:16:52 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.409 [2024-05-14 04:16:52.780096] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.409 04:16:52 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:38.409 04:16:52 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:19:38.668 Malloc0 00:19:38.668 04:16:53 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:38.930 Delay0 00:19:38.930 04:16:53 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:38.930 04:16:53 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:19:39.191 NULL1 00:19:39.191 04:16:53 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:39.191 04:16:53 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=4001141 00:19:39.191 04:16:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:39.191 04:16:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:39.191 04:16:53 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:19:39.449 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.449 04:16:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:39.449 04:16:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:19:39.449 04:16:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:19:39.707 [2024-05-14 04:16:54.162534] bdev.c:4963:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:19:39.707 true 00:19:39.707 04:16:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:39.707 04:16:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:39.967 04:16:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:39.967 04:16:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:19:39.967 04:16:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:19:40.225 true 00:19:40.225 04:16:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:40.225 04:16:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:40.484 04:16:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:40.484 04:16:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:19:40.484 04:16:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:19:40.744 true 00:19:40.744 04:16:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:40.744 04:16:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:40.744 04:16:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:41.003 04:16:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:19:41.003 04:16:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:19:41.003 true 00:19:41.263 04:16:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:41.263 04:16:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:41.263 04:16:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:41.522 04:16:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:19:41.522 04:16:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:19:41.522 true 00:19:41.522 04:16:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:41.522 04:16:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:41.782 04:16:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:41.782 04:16:56 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:19:41.782 04:16:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:19:42.041 true 00:19:42.041 04:16:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:42.041 04:16:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:42.302 04:16:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:42.302 04:16:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:19:42.302 04:16:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:19:42.562 true 00:19:42.562 04:16:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:42.562 04:16:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:42.562 04:16:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:42.822 04:16:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:19:42.822 04:16:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:19:42.822 true 00:19:43.082 04:16:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:43.082 04:16:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:43.082 04:16:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:43.342 04:16:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:19:43.342 04:16:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:19:43.342 true 00:19:43.342 04:16:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:43.342 04:16:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:43.602 04:16:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:43.602 04:16:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:19:43.861 04:16:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:19:43.861 true 00:19:43.861 04:16:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:43.861 04:16:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:44.121 04:16:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:44.121 04:16:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:19:44.121 04:16:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1011 00:19:44.379 true 00:19:44.379 04:16:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:44.379 04:16:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:44.638 04:16:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:44.638 04:16:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:19:44.638 04:16:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:19:44.894 true 00:19:44.894 04:16:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:44.894 04:16:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:44.894 04:16:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:45.153 04:16:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:19:45.153 04:16:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:19:45.153 true 00:19:45.153 04:16:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:45.153 04:16:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.412 04:16:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:45.671 04:17:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:19:45.672 04:17:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:19:45.672 true 00:19:45.672 04:17:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:45.672 04:17:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.930 04:17:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:45.930 04:17:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:19:45.930 04:17:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:19:46.188 true 00:19:46.188 04:17:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:46.188 04:17:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:46.447 04:17:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:46.447 04:17:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:19:46.447 04:17:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:19:46.704 true 00:19:46.704 04:17:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:46.704 04:17:01 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:46.704 04:17:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:46.963 04:17:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:19:46.963 04:17:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:19:46.963 true 00:19:46.963 04:17:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:46.963 04:17:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:47.224 04:17:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:47.483 04:17:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:19:47.483 04:17:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:19:47.483 true 00:19:47.483 04:17:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:47.483 04:17:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:47.742 04:17:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:47.742 04:17:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:19:47.743 04:17:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:19:48.002 true 00:19:48.002 04:17:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:48.002 04:17:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:48.259 04:17:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:48.259 04:17:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:19:48.259 04:17:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:19:48.517 true 00:19:48.517 04:17:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:48.517 04:17:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:48.517 04:17:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:48.776 04:17:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:19:48.776 04:17:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:19:48.776 true 00:19:48.776 04:17:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:48.776 04:17:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:49.035 04:17:03 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:49.035 04:17:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:19:49.035 04:17:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:19:49.294 true 00:19:49.294 04:17:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:49.294 04:17:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:49.553 04:17:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:49.553 04:17:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:19:49.553 04:17:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:19:49.810 true 00:19:49.810 04:17:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:49.810 04:17:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:49.811 04:17:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:50.068 04:17:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:19:50.068 04:17:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:19:50.068 true 00:19:50.068 04:17:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:50.069 04:17:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:50.328 04:17:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:50.328 04:17:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:19:50.328 04:17:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:19:50.587 true 00:19:50.587 04:17:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:50.587 04:17:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:50.846 04:17:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:50.846 04:17:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:19:50.846 04:17:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:19:51.107 true 00:19:51.107 04:17:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:51.107 04:17:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:51.107 04:17:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:51.368 04:17:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 
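
The records above and below repeat a single pattern: while the spdk_nvme_perf initiator (PERF_PID 4001141) is still alive, ns_hotplug_stress.sh hot-removes namespace 1, re-adds the Delay0 namespace, and grows the NULL1 bdev by one block. A minimal sketch of that loop, reconstructed only from the RPC calls printed in this trace (the rpc shorthand and loop form are illustrative, not the script itself):

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py     # path as printed in the trace
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                           # stop once spdk_nvme_perf exits
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"                      # resize the NULL1 namespace's bdev
  done
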
00:19:51.368 04:17:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:19:51.368 true 00:19:51.627 04:17:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:51.627 04:17:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:51.627 04:17:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:51.885 04:17:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:19:51.885 04:17:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:19:51.885 true 00:19:51.885 04:17:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:51.885 04:17:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.143 04:17:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:52.143 04:17:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:19:52.143 04:17:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:19:52.402 true 00:19:52.402 04:17:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:52.402 04:17:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.402 04:17:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:52.663 04:17:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:19:52.663 04:17:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:19:52.923 true 00:19:52.923 04:17:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:52.923 04:17:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.923 04:17:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:53.183 04:17:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:19:53.183 04:17:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:19:53.183 true 00:19:53.183 04:17:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:53.183 04:17:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:53.442 04:17:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:53.713 04:17:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:19:53.713 04:17:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:19:53.713 true 00:19:53.713 04:17:08 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:53.713 04:17:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:54.045 04:17:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:54.045 04:17:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:19:54.045 04:17:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:19:54.305 true 00:19:54.305 04:17:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:54.305 04:17:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:54.305 04:17:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:54.565 04:17:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:19:54.565 04:17:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:19:54.565 true 00:19:54.565 04:17:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:54.565 04:17:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:54.825 04:17:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:55.084 04:17:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:19:55.084 04:17:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:19:55.084 true 00:19:55.084 04:17:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:55.084 04:17:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.341 04:17:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:55.341 04:17:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:19:55.341 04:17:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:19:55.599 true 00:19:55.599 04:17:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:55.599 04:17:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.599 04:17:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:55.857 04:17:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:19:55.857 04:17:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:19:55.857 true 00:19:56.115 04:17:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:56.115 04:17:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:56.115 04:17:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:56.373 04:17:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:19:56.373 04:17:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:19:56.373 true 00:19:56.373 04:17:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:56.373 04:17:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:56.632 04:17:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:56.632 04:17:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:19:56.632 04:17:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:19:56.891 true 00:19:56.891 04:17:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:56.891 04:17:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:56.891 04:17:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:57.149 04:17:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:19:57.149 04:17:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:19:57.149 true 00:19:57.149 04:17:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:57.149 04:17:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.407 04:17:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:57.407 04:17:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:19:57.407 04:17:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:19:57.666 true 00:19:57.666 04:17:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:57.666 04:17:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.666 04:17:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:57.924 04:17:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:19:57.924 04:17:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:19:57.924 true 00:19:57.924 04:17:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:57.924 04:17:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:58.181 04:17:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:58.181 04:17:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:19:58.181 04:17:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:19:58.439 true 00:19:58.439 04:17:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:58.439 04:17:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:58.696 04:17:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:58.696 04:17:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:19:58.696 04:17:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:19:58.954 true 00:19:58.954 04:17:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:58.954 04:17:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:58.954 04:17:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:59.213 04:17:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:19:59.213 04:17:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:19:59.213 true 00:19:59.213 04:17:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:59.213 04:17:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:59.470 04:17:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:59.470 04:17:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:19:59.470 04:17:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:19:59.730 true 00:19:59.730 04:17:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:59.730 04:17:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:59.730 04:17:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:59.989 04:17:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:19:59.989 04:17:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:19:59.989 true 00:19:59.989 04:17:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:19:59.989 04:17:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:00.248 04:17:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:00.506 04:17:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:20:00.506 04:17:14 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:20:00.506 true 00:20:00.506 04:17:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:00.506 04:17:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:00.764 04:17:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:00.764 04:17:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:20:00.764 04:17:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:20:01.021 true 00:20:01.021 04:17:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:01.021 04:17:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.021 04:17:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:01.281 04:17:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:20:01.281 04:17:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:20:01.281 true 00:20:01.281 04:17:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:01.281 04:17:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.540 04:17:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:01.540 04:17:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:20:01.540 04:17:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:20:01.797 true 00:20:01.797 04:17:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:01.797 04:17:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.797 04:17:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:02.054 04:17:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:20:02.054 04:17:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:20:02.054 true 00:20:02.054 04:17:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:02.054 04:17:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:02.312 04:17:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:02.312 04:17:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:20:02.312 04:17:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:20:02.569 true 00:20:02.569 04:17:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 
00:20:02.569 04:17:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:02.569 04:17:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:02.827 04:17:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054 00:20:02.827 04:17:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:20:03.086 true 00:20:03.086 04:17:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:03.086 04:17:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:03.086 04:17:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:03.346 04:17:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1055 00:20:03.346 04:17:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:20:03.346 true 00:20:03.346 04:17:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:03.346 04:17:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:03.604 04:17:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:03.604 04:17:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1056 00:20:03.604 04:17:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:20:03.864 true 00:20:03.864 04:17:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:03.864 04:17:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:03.864 04:17:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:04.121 04:17:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1057 00:20:04.121 04:17:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:20:04.379 true 00:20:04.379 04:17:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:04.379 04:17:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:04.379 04:17:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:04.637 04:17:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1058 00:20:04.637 04:17:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:20:04.637 true 00:20:04.637 04:17:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:04.637 04:17:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:04.894 
04:17:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:04.894 04:17:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1059 00:20:04.894 04:17:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:20:05.151 true 00:20:05.151 04:17:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:05.151 04:17:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:05.151 04:17:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:05.410 04:17:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1060 00:20:05.410 04:17:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:20:05.410 true 00:20:05.668 04:17:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:05.669 04:17:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:05.669 04:17:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:05.927 04:17:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1061 00:20:05.927 04:17:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:20:05.927 true 00:20:05.927 04:17:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:05.927 04:17:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:06.186 04:17:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:06.444 04:17:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1062 00:20:06.444 04:17:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:20:06.444 true 00:20:06.444 04:17:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:06.444 04:17:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:06.703 04:17:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:06.703 04:17:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1063 00:20:06.703 04:17:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:20:06.961 true 00:20:06.961 04:17:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:06.961 04:17:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:06.961 04:17:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:07.220 04:17:21 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1064 00:20:07.221 04:17:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1064 00:20:07.221 true 00:20:07.221 04:17:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:07.221 04:17:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:07.480 04:17:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:07.480 04:17:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1065 00:20:07.480 04:17:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1065 00:20:07.739 true 00:20:07.739 04:17:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:07.739 04:17:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:07.996 04:17:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:07.996 04:17:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1066 00:20:07.996 04:17:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1066 00:20:08.254 true 00:20:08.254 04:17:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:08.254 04:17:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:08.254 04:17:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:08.511 04:17:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1067 00:20:08.511 04:17:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1067 00:20:08.511 true 00:20:08.511 04:17:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:08.511 04:17:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:08.770 04:17:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:08.770 04:17:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1068 00:20:08.770 04:17:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1068 00:20:09.028 true 00:20:09.028 04:17:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141 00:20:09.028 04:17:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:09.028 04:17:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:09.295 04:17:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1069 00:20:09.296 04:17:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1069
00:20:09.628 true
00:20:09.628 04:17:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141
00:20:09.628 04:17:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:09.628 Initializing NVMe Controllers
00:20:09.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:09.629 Controller IO queue size 128, less than required.
00:20:09.629 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:09.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:09.629 Initialization complete. Launching workers.
00:20:09.629 ========================================================
00:20:09.629 Latency(us)
00:20:09.629 Device Information : IOPS MiB/s Average min max
00:20:09.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30732.12 15.01 4165.02 1881.59 10313.38
00:20:09.629 ========================================================
00:20:09.629 Total : 30732.12 15.01 4165.02 1881.59 10313.38
00:20:09.629
00:20:09.629 04:17:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:20:09.629 04:17:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1070
00:20:09.629 04:17:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1070
00:20:09.887 true
00:20:09.887 04:17:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 4001141
00:20:09.887 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (4001141) - No such process
00:20:09.887 04:17:24 -- target/ns_hotplug_stress.sh@44 -- # wait 4001141
00:20:09.887 04:17:24 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:20:09.887 04:17:24 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:20:09.887 04:17:24 -- nvmf/common.sh@476 -- # nvmfcleanup
00:20:09.887 04:17:24 -- nvmf/common.sh@116 -- # sync
00:20:09.887 04:17:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:09.887 04:17:24 -- nvmf/common.sh@119 -- # set +e
00:20:09.887 04:17:24 -- nvmf/common.sh@120 -- # for i in {1..20}
00:20:09.887 04:17:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:09.887 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:20:09.887 04:17:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:09.887 04:17:24 -- nvmf/common.sh@123 -- # set -e
00:20:09.887 04:17:24 -- nvmf/common.sh@124 -- # return 0
00:20:09.887 04:17:24 -- nvmf/common.sh@477 -- # '[' -n 4000799 ']'
00:20:09.887 04:17:24 -- nvmf/common.sh@478 -- # killprocess 4000799
00:20:09.887 04:17:24 -- common/autotest_common.sh@926 -- # '[' -z 4000799 ']'
00:20:09.887 04:17:24 -- common/autotest_common.sh@930 -- # kill -0 4000799
00:20:09.887 04:17:24 -- common/autotest_common.sh@931 -- # uname
00:20:09.887 04:17:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:20:09.887 04:17:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4000799
00:20:09.887 04:17:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:20:09.887 04:17:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:20:09.887 04:17:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4000799'
00:20:09.887 killing process with
pid 4000799 00:20:09.887 04:17:24 -- common/autotest_common.sh@945 -- # kill 4000799 00:20:09.887 04:17:24 -- common/autotest_common.sh@950 -- # wait 4000799 00:20:10.453 04:17:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:10.453 04:17:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:10.453 04:17:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:10.453 04:17:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.453 04:17:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:10.453 04:17:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.453 04:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.453 04:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.980 04:17:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:12.980 00:20:12.980 real 0m41.638s 00:20:12.980 user 2m34.675s 00:20:12.980 sys 0m11.792s 00:20:12.980 04:17:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.980 04:17:26 -- common/autotest_common.sh@10 -- # set +x 00:20:12.980 ************************************ 00:20:12.980 END TEST nvmf_ns_hotplug_stress 00:20:12.980 ************************************ 00:20:12.980 04:17:27 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:12.980 04:17:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:12.980 04:17:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:12.980 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:20:12.980 ************************************ 00:20:12.980 START TEST nvmf_connect_stress 00:20:12.980 ************************************ 00:20:12.980 04:17:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:12.980 * Looking for test storage... 
00:20:12.980 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:12.980 04:17:27 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.980 04:17:27 -- nvmf/common.sh@7 -- # uname -s 00:20:12.980 04:17:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.980 04:17:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.980 04:17:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.980 04:17:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.980 04:17:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.980 04:17:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.980 04:17:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.980 04:17:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.980 04:17:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.980 04:17:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.980 04:17:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:12.980 04:17:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:20:12.980 04:17:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.980 04:17:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.980 04:17:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:12.980 04:17:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:12.980 04:17:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.980 04:17:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.980 04:17:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.980 04:17:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.980 04:17:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.981 04:17:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.981 04:17:27 -- paths/export.sh@5 -- # export PATH 00:20:12.981 04:17:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.981 04:17:27 -- nvmf/common.sh@46 -- # : 0 00:20:12.981 04:17:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:12.981 04:17:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:12.981 04:17:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:12.981 04:17:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.981 04:17:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.981 04:17:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:12.981 04:17:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:12.981 04:17:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:12.981 04:17:27 -- target/connect_stress.sh@12 -- # nvmftestinit 00:20:12.981 04:17:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:12.981 04:17:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.981 04:17:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:12.981 04:17:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:12.981 04:17:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:12.981 04:17:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.981 04:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.981 04:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.981 04:17:27 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:12.981 04:17:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:12.981 04:17:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:12.981 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:20:19.560 04:17:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:19.560 04:17:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:19.560 04:17:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:19.560 04:17:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:19.560 04:17:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:19.560 04:17:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:19.560 04:17:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:19.560 04:17:33 -- nvmf/common.sh@294 -- # net_devs=() 00:20:19.560 04:17:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:19.560 04:17:33 -- nvmf/common.sh@295 -- # e810=() 00:20:19.560 04:17:33 -- nvmf/common.sh@295 -- # local -ga e810 00:20:19.560 04:17:33 -- nvmf/common.sh@296 -- # 
x722=() 00:20:19.560 04:17:33 -- nvmf/common.sh@296 -- # local -ga x722 00:20:19.560 04:17:33 -- nvmf/common.sh@297 -- # mlx=() 00:20:19.560 04:17:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:19.560 04:17:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.560 04:17:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:19.560 04:17:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:19.560 04:17:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:19.560 04:17:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:19.560 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:19.560 04:17:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:19.560 04:17:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:19.560 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:19.560 04:17:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:19.560 04:17:33 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:19.560 04:17:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.560 04:17:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:19.560 04:17:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.560 04:17:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:19.560 Found net devices under 0000:27:00.0: cvl_0_0 00:20:19.560 04:17:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.560 04:17:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
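
Before connect_stress can pick a transport interface, nvmf/common.sh scans the PCI NICs it recognizes (two Intel 0x159b functions on this node) and reads each one's netdev name out of sysfs. A standalone sketch of that discovery step, using the sysfs expansion shown in the trace (the hard-coded device list is for illustration only):

  for pci in 0000:27:00.0 0000:27:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # the kernel exposes the netdev under its PCI device
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done
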
00:20:19.560 04:17:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.560 04:17:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:19.560 04:17:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.560 04:17:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:19.560 Found net devices under 0000:27:00.1: cvl_0_1 00:20:19.560 04:17:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.560 04:17:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:19.560 04:17:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:19.560 04:17:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:19.560 04:17:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.560 04:17:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.560 04:17:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.560 04:17:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:19.560 04:17:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.560 04:17:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.560 04:17:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:19.560 04:17:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.560 04:17:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.560 04:17:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:19.560 04:17:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:19.560 04:17:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.560 04:17:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.560 04:17:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.560 04:17:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.560 04:17:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:19.560 04:17:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.560 04:17:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.560 04:17:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.560 04:17:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:19.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:20:19.560 00:20:19.560 --- 10.0.0.2 ping statistics --- 00:20:19.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.560 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:20:19.560 04:17:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:20:19.560 00:20:19.560 --- 10.0.0.1 ping statistics --- 00:20:19.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.560 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:20:19.560 04:17:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.560 04:17:33 -- nvmf/common.sh@410 -- # return 0 00:20:19.560 04:17:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:19.560 04:17:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.560 04:17:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:19.560 04:17:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.560 04:17:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:19.560 04:17:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:19.560 04:17:33 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:19.560 04:17:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:19.560 04:17:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:19.560 04:17:33 -- common/autotest_common.sh@10 -- # set +x 00:20:19.560 04:17:33 -- nvmf/common.sh@469 -- # nvmfpid=4011566 00:20:19.560 04:17:33 -- nvmf/common.sh@470 -- # waitforlisten 4011566 00:20:19.560 04:17:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:19.560 04:17:33 -- common/autotest_common.sh@819 -- # '[' -z 4011566 ']' 00:20:19.560 04:17:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.560 04:17:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:19.560 04:17:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.560 04:17:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:19.560 04:17:33 -- common/autotest_common.sh@10 -- # set +x 00:20:19.560 [2024-05-14 04:17:33.801043] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:19.560 [2024-05-14 04:17:33.801177] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.560 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.561 [2024-05-14 04:17:33.939832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.561 [2024-05-14 04:17:34.043010] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:19.561 [2024-05-14 04:17:34.043228] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.561 [2024-05-14 04:17:34.043244] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.561 [2024-05-14 04:17:34.043255] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
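(For readability: a condensed recap of the nvmf_tcp_init steps traced above, not the script itself. The interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.x addresses are taken from this log and will differ on other hosts.)

```bash
# Reconstructed sketch of the test-bed network setup shown in the trace above.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                                # clear any stale addresses
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"                   # isolated namespace for the target side
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"      # move the target-side port into it

ip addr add 10.0.0.1/24 dev cvl_0_1                                           # initiator side (host)
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT                  # allow NVMe/TCP traffic

ping -c 1 10.0.0.2                                                            # sanity check, host -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1                     # and target -> host
```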
00:20:19.561 [2024-05-14 04:17:34.043342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.561 [2024-05-14 04:17:34.043442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.561 [2024-05-14 04:17:34.043453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.133 04:17:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:20.133 04:17:34 -- common/autotest_common.sh@852 -- # return 0 00:20:20.133 04:17:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:20.133 04:17:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:20.133 04:17:34 -- common/autotest_common.sh@10 -- # set +x 00:20:20.133 04:17:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.133 04:17:34 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:20.133 04:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.133 04:17:34 -- common/autotest_common.sh@10 -- # set +x 00:20:20.133 [2024-05-14 04:17:34.559799] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.133 04:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.133 04:17:34 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:20.133 04:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.133 04:17:34 -- common/autotest_common.sh@10 -- # set +x 00:20:20.133 04:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.133 04:17:34 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.133 04:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.133 04:17:34 -- common/autotest_common.sh@10 -- # set +x 00:20:20.133 [2024-05-14 04:17:34.592015] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.133 04:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.133 04:17:34 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:20.133 04:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.133 04:17:34 -- common/autotest_common.sh@10 -- # set +x 00:20:20.133 NULL1 00:20:20.133 04:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.133 04:17:34 -- target/connect_stress.sh@21 -- # PERF_PID=4011885 00:20:20.133 04:17:34 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:20.133 04:17:34 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:20.133 04:17:34 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # seq 1 20 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 
-- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:20.133 04:17:34 -- target/connect_stress.sh@28 -- # cat 00:20:20.133 04:17:34 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:20.133 04:17:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.133 04:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.133 04:17:34 -- common/autotest_common.sh@10 -- # set +x 00:20:20.705 04:17:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.706 04:17:35 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:20.706 04:17:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.706 04:17:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.706 04:17:35 -- common/autotest_common.sh@10 -- # set +x 00:20:20.967 04:17:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.967 04:17:35 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:20.967 04:17:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.967 04:17:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.967 04:17:35 -- common/autotest_common.sh@10 -- # set +x 00:20:21.228 04:17:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.228 04:17:35 -- target/connect_stress.sh@34 -- # kill -0 4011885 
00:20:21.228 04:17:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.228 04:17:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.228 04:17:35 -- common/autotest_common.sh@10 -- # set +x 00:20:21.487 04:17:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.487 04:17:35 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:21.487 04:17:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.487 04:17:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.487 04:17:35 -- common/autotest_common.sh@10 -- # set +x 00:20:21.748 04:17:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.748 04:17:36 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:21.748 04:17:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.748 04:17:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.748 04:17:36 -- common/autotest_common.sh@10 -- # set +x 00:20:22.320 04:17:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.320 04:17:36 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:22.320 04:17:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.320 04:17:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.320 04:17:36 -- common/autotest_common.sh@10 -- # set +x 00:20:22.583 04:17:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.583 04:17:36 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:22.583 04:17:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.583 04:17:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.583 04:17:36 -- common/autotest_common.sh@10 -- # set +x 00:20:22.843 04:17:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.843 04:17:37 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:22.843 04:17:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.843 04:17:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.843 04:17:37 -- common/autotest_common.sh@10 -- # set +x 00:20:23.103 04:17:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.103 04:17:37 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:23.103 04:17:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:23.103 04:17:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.103 04:17:37 -- common/autotest_common.sh@10 -- # set +x 00:20:23.364 04:17:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.364 04:17:37 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:23.364 04:17:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:23.364 04:17:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.364 04:17:37 -- common/autotest_common.sh@10 -- # set +x 00:20:23.936 04:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.936 04:17:38 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:23.936 04:17:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:23.936 04:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.936 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.197 04:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.198 04:17:38 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:24.198 04:17:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:24.198 04:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.198 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.458 04:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.458 04:17:38 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:24.458 
04:17:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:24.458 04:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.458 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.717 04:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.717 04:17:39 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:24.717 04:17:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:24.717 04:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.717 04:17:39 -- common/autotest_common.sh@10 -- # set +x 00:20:24.976 04:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.976 04:17:39 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:24.976 04:17:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:24.976 04:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.976 04:17:39 -- common/autotest_common.sh@10 -- # set +x 00:20:25.547 04:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.547 04:17:39 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:25.547 04:17:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.547 04:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.547 04:17:39 -- common/autotest_common.sh@10 -- # set +x 00:20:25.808 04:17:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:25.808 04:17:40 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:25.808 04:17:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.808 04:17:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:25.808 04:17:40 -- common/autotest_common.sh@10 -- # set +x 00:20:26.067 04:17:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.067 04:17:40 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:26.067 04:17:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:26.067 04:17:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.067 04:17:40 -- common/autotest_common.sh@10 -- # set +x 00:20:26.325 04:17:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.325 04:17:40 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:26.325 04:17:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:26.325 04:17:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.325 04:17:40 -- common/autotest_common.sh@10 -- # set +x 00:20:26.583 04:17:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.583 04:17:41 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:26.583 04:17:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:26.583 04:17:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.583 04:17:41 -- common/autotest_common.sh@10 -- # set +x 00:20:27.152 04:17:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.152 04:17:41 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:27.152 04:17:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.152 04:17:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.152 04:17:41 -- common/autotest_common.sh@10 -- # set +x 00:20:27.411 04:17:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.411 04:17:41 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:27.411 04:17:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.411 04:17:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.411 04:17:41 -- common/autotest_common.sh@10 -- # set +x 00:20:27.670 04:17:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.670 04:17:42 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:27.670 04:17:42 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.670 04:17:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.670 04:17:42 -- common/autotest_common.sh@10 -- # set +x 00:20:27.929 04:17:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.929 04:17:42 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:27.929 04:17:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:27.929 04:17:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.929 04:17:42 -- common/autotest_common.sh@10 -- # set +x 00:20:28.187 04:17:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.187 04:17:42 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:28.187 04:17:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.187 04:17:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.187 04:17:42 -- common/autotest_common.sh@10 -- # set +x 00:20:28.754 04:17:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.754 04:17:43 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:28.754 04:17:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:28.754 04:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.754 04:17:43 -- common/autotest_common.sh@10 -- # set +x 00:20:29.014 04:17:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.014 04:17:43 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:29.014 04:17:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.014 04:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:29.014 04:17:43 -- common/autotest_common.sh@10 -- # set +x 00:20:29.274 04:17:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.274 04:17:43 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:29.274 04:17:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.274 04:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:29.274 04:17:43 -- common/autotest_common.sh@10 -- # set +x 00:20:29.538 04:17:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.538 04:17:43 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:29.538 04:17:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.538 04:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:29.538 04:17:43 -- common/autotest_common.sh@10 -- # set +x 00:20:29.799 04:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.799 04:17:44 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:29.799 04:17:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:29.799 04:17:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:29.799 04:17:44 -- common/autotest_common.sh@10 -- # set +x 00:20:30.056 04:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.056 04:17:44 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:30.056 04:17:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:30.056 04:17:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.056 04:17:44 -- common/autotest_common.sh@10 -- # set +x 00:20:30.314 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.576 04:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.576 04:17:44 -- target/connect_stress.sh@34 -- # kill -0 4011885 00:20:30.576 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4011885) - No such process 00:20:30.576 04:17:44 -- target/connect_stress.sh@38 -- # wait 4011885 00:20:30.576 04:17:44 -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:30.576 04:17:44 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:30.576 04:17:44 -- target/connect_stress.sh@43 -- # nvmftestfini 00:20:30.576 04:17:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:30.576 04:17:44 -- nvmf/common.sh@116 -- # sync 00:20:30.576 04:17:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:30.576 04:17:44 -- nvmf/common.sh@119 -- # set +e 00:20:30.576 04:17:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:30.576 04:17:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:30.576 rmmod nvme_tcp 00:20:30.576 rmmod nvme_fabrics 00:20:30.576 rmmod nvme_keyring 00:20:30.576 04:17:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:30.576 04:17:45 -- nvmf/common.sh@123 -- # set -e 00:20:30.576 04:17:45 -- nvmf/common.sh@124 -- # return 0 00:20:30.576 04:17:45 -- nvmf/common.sh@477 -- # '[' -n 4011566 ']' 00:20:30.576 04:17:45 -- nvmf/common.sh@478 -- # killprocess 4011566 00:20:30.576 04:17:45 -- common/autotest_common.sh@926 -- # '[' -z 4011566 ']' 00:20:30.576 04:17:45 -- common/autotest_common.sh@930 -- # kill -0 4011566 00:20:30.576 04:17:45 -- common/autotest_common.sh@931 -- # uname 00:20:30.576 04:17:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:30.576 04:17:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4011566 00:20:30.576 04:17:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:30.576 04:17:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:30.576 04:17:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4011566' 00:20:30.576 killing process with pid 4011566 00:20:30.576 04:17:45 -- common/autotest_common.sh@945 -- # kill 4011566 00:20:30.576 04:17:45 -- common/autotest_common.sh@950 -- # wait 4011566 00:20:31.182 04:17:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:31.182 04:17:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:31.182 04:17:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:31.182 04:17:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.182 04:17:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:31.182 04:17:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.182 04:17:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.182 04:17:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.087 04:17:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:33.087 00:20:33.087 real 0m20.594s 00:20:33.087 user 0m44.248s 00:20:33.087 sys 0m6.679s 00:20:33.087 04:17:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:33.087 04:17:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.087 ************************************ 00:20:33.087 END TEST nvmf_connect_stress 00:20:33.087 ************************************ 00:20:33.087 04:17:47 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:33.087 04:17:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:33.087 04:17:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:33.087 04:17:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.087 ************************************ 00:20:33.087 START TEST nvmf_fused_ordering 00:20:33.087 ************************************ 00:20:33.087 04:17:47 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:33.345 * Looking for test storage... 00:20:33.345 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:33.345 04:17:47 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.345 04:17:47 -- nvmf/common.sh@7 -- # uname -s 00:20:33.345 04:17:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.345 04:17:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.345 04:17:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.345 04:17:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.345 04:17:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.345 04:17:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.345 04:17:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.345 04:17:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.345 04:17:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.345 04:17:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.345 04:17:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:33.345 04:17:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:20:33.345 04:17:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.345 04:17:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.345 04:17:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:33.345 04:17:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:33.345 04:17:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.345 04:17:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.345 04:17:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.345 04:17:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.345 04:17:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.346 04:17:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.346 04:17:47 -- paths/export.sh@5 -- # export PATH 00:20:33.346 04:17:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.346 04:17:47 -- nvmf/common.sh@46 -- # : 0 00:20:33.346 04:17:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:33.346 04:17:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:33.346 04:17:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:33.346 04:17:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.346 04:17:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.346 04:17:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:33.346 04:17:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:33.346 04:17:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:33.346 04:17:47 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:33.346 04:17:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:33.346 04:17:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.346 04:17:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:33.346 04:17:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:33.346 04:17:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:33.346 04:17:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.346 04:17:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.346 04:17:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.346 04:17:47 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:33.346 04:17:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:33.346 04:17:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:33.346 04:17:47 -- common/autotest_common.sh@10 -- # set +x 00:20:38.619 04:17:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:38.619 04:17:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:38.619 04:17:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:38.619 04:17:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:38.619 04:17:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:38.619 04:17:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:38.619 04:17:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:38.619 04:17:52 -- nvmf/common.sh@294 -- # net_devs=() 00:20:38.619 04:17:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:38.619 04:17:52 -- nvmf/common.sh@295 -- # e810=() 00:20:38.619 04:17:52 -- nvmf/common.sh@295 -- # local -ga e810 00:20:38.619 04:17:52 -- nvmf/common.sh@296 -- # 
x722=() 00:20:38.619 04:17:52 -- nvmf/common.sh@296 -- # local -ga x722 00:20:38.619 04:17:52 -- nvmf/common.sh@297 -- # mlx=() 00:20:38.619 04:17:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:38.619 04:17:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.619 04:17:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.619 04:17:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.619 04:17:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.619 04:17:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.619 04:17:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.620 04:17:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.620 04:17:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.620 04:17:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.620 04:17:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.620 04:17:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.620 04:17:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:38.620 04:17:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:38.620 04:17:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:38.620 04:17:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:38.620 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:38.620 04:17:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:38.620 04:17:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:38.620 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:38.620 04:17:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:38.620 04:17:52 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:38.620 04:17:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.620 04:17:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:38.620 04:17:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.620 04:17:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:38.620 Found net devices under 0000:27:00.0: cvl_0_0 00:20:38.620 04:17:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.620 04:17:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:20:38.620 04:17:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.620 04:17:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:38.620 04:17:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.620 04:17:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:38.620 Found net devices under 0000:27:00.1: cvl_0_1 00:20:38.620 04:17:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.620 04:17:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:38.620 04:17:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:38.620 04:17:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:38.620 04:17:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.620 04:17:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.620 04:17:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.620 04:17:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:38.620 04:17:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.620 04:17:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.620 04:17:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:38.620 04:17:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.620 04:17:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.620 04:17:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:38.620 04:17:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:38.620 04:17:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.620 04:17:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.620 04:17:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.620 04:17:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.620 04:17:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:38.620 04:17:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.620 04:17:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.620 04:17:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.620 04:17:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:38.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:20:38.620 00:20:38.620 --- 10.0.0.2 ping statistics --- 00:20:38.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.620 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:20:38.620 04:17:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:20:38.620 00:20:38.620 --- 10.0.0.1 ping statistics --- 00:20:38.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.620 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:38.620 04:17:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.620 04:17:52 -- nvmf/common.sh@410 -- # return 0 00:20:38.620 04:17:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:38.620 04:17:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.620 04:17:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:38.620 04:17:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.620 04:17:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:38.620 04:17:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:38.620 04:17:52 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:38.620 04:17:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:38.620 04:17:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:38.620 04:17:52 -- common/autotest_common.sh@10 -- # set +x 00:20:38.620 04:17:52 -- nvmf/common.sh@469 -- # nvmfpid=4017721 00:20:38.620 04:17:52 -- nvmf/common.sh@470 -- # waitforlisten 4017721 00:20:38.620 04:17:52 -- common/autotest_common.sh@819 -- # '[' -z 4017721 ']' 00:20:38.620 04:17:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.620 04:17:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:38.620 04:17:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.620 04:17:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.620 04:17:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:38.620 04:17:52 -- common/autotest_common.sh@10 -- # set +x 00:20:38.620 [2024-05-14 04:17:52.974694] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:38.620 [2024-05-14 04:17:52.974803] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.620 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.620 [2024-05-14 04:17:53.100456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.620 [2024-05-14 04:17:53.200000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:38.620 [2024-05-14 04:17:53.200182] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.620 [2024-05-14 04:17:53.200208] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.620 [2024-05-14 04:17:53.200217] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
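(Again for readability: the target launch traced above, condensed. The binary path, RPC socket, and the 0x2 core mask are exactly as they appear in this log; the backgrounding and PID capture are an illustrative paraphrase of what nvmfappstart does.)

```bash
# Reconstructed sketch of nvmfappstart for the fused_ordering run: start nvmf_tgt inside the
# target namespace with core mask 0x2, then wait for its RPC socket before configuring it.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# The harness polls until the target listens on /var/tmp/spdk.sock, then issues the
# nvmf_create_transport / nvmf_create_subsystem / nvmf_subsystem_add_listener RPCs seen below.
```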
00:20:38.620 [2024-05-14 04:17:53.200243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.186 04:17:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:39.186 04:17:53 -- common/autotest_common.sh@852 -- # return 0 00:20:39.186 04:17:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:39.186 04:17:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:39.186 04:17:53 -- common/autotest_common.sh@10 -- # set +x 00:20:39.186 04:17:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.186 04:17:53 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:39.186 04:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.186 04:17:53 -- common/autotest_common.sh@10 -- # set +x 00:20:39.186 [2024-05-14 04:17:53.710358] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.186 04:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.186 04:17:53 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:39.186 04:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.186 04:17:53 -- common/autotest_common.sh@10 -- # set +x 00:20:39.186 04:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.186 04:17:53 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.186 04:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.186 04:17:53 -- common/autotest_common.sh@10 -- # set +x 00:20:39.186 [2024-05-14 04:17:53.726499] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.186 04:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.186 04:17:53 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:39.186 04:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.186 04:17:53 -- common/autotest_common.sh@10 -- # set +x 00:20:39.186 NULL1 00:20:39.186 04:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.186 04:17:53 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:39.186 04:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.186 04:17:53 -- common/autotest_common.sh@10 -- # set +x 00:20:39.186 04:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.186 04:17:53 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:39.187 04:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.187 04:17:53 -- common/autotest_common.sh@10 -- # set +x 00:20:39.187 04:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.187 04:17:53 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:39.444 [2024-05-14 04:17:53.777269] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:39.444 [2024-05-14 04:17:53.777317] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017940 ] 00:20:39.444 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.704 Attached to nqn.2016-06.io.spdk:cnode1 00:20:39.704 Namespace ID: 1 size: 1GB 00:20:39.704 fused_ordering(0) 00:20:39.704 fused_ordering(1) 00:20:39.704 fused_ordering(2) 00:20:39.704 fused_ordering(3) 00:20:39.704 fused_ordering(4) 00:20:39.704 fused_ordering(5) 00:20:39.704 fused_ordering(6) 00:20:39.704 fused_ordering(7) 00:20:39.704 fused_ordering(8) 00:20:39.704 fused_ordering(9) 00:20:39.704 fused_ordering(10) 00:20:39.704 fused_ordering(11) 00:20:39.704 fused_ordering(12) 00:20:39.704 fused_ordering(13) 00:20:39.704 fused_ordering(14) 00:20:39.704 fused_ordering(15) 00:20:39.704 fused_ordering(16) 00:20:39.704 fused_ordering(17) 00:20:39.704 fused_ordering(18) 00:20:39.704 fused_ordering(19) 00:20:39.704 fused_ordering(20) 00:20:39.704 fused_ordering(21) 00:20:39.704 fused_ordering(22) 00:20:39.704 fused_ordering(23) 00:20:39.704 fused_ordering(24) 00:20:39.704 fused_ordering(25) 00:20:39.704 fused_ordering(26) 00:20:39.704 fused_ordering(27) 00:20:39.704 fused_ordering(28) 00:20:39.704 fused_ordering(29) 00:20:39.704 fused_ordering(30) 00:20:39.704 fused_ordering(31) 00:20:39.704 fused_ordering(32) 00:20:39.704 fused_ordering(33) 00:20:39.704 fused_ordering(34) 00:20:39.704 fused_ordering(35) 00:20:39.704 fused_ordering(36) 00:20:39.704 fused_ordering(37) 00:20:39.704 fused_ordering(38) 00:20:39.704 fused_ordering(39) 00:20:39.704 fused_ordering(40) 00:20:39.704 fused_ordering(41) 00:20:39.704 fused_ordering(42) 00:20:39.704 fused_ordering(43) 00:20:39.704 fused_ordering(44) 00:20:39.704 fused_ordering(45) 00:20:39.704 fused_ordering(46) 00:20:39.704 fused_ordering(47) 00:20:39.704 fused_ordering(48) 00:20:39.704 fused_ordering(49) 00:20:39.704 fused_ordering(50) 00:20:39.704 fused_ordering(51) 00:20:39.704 fused_ordering(52) 00:20:39.704 fused_ordering(53) 00:20:39.704 fused_ordering(54) 00:20:39.704 fused_ordering(55) 00:20:39.704 fused_ordering(56) 00:20:39.704 fused_ordering(57) 00:20:39.704 fused_ordering(58) 00:20:39.704 fused_ordering(59) 00:20:39.704 fused_ordering(60) 00:20:39.704 fused_ordering(61) 00:20:39.704 fused_ordering(62) 00:20:39.704 fused_ordering(63) 00:20:39.704 fused_ordering(64) 00:20:39.704 fused_ordering(65) 00:20:39.704 fused_ordering(66) 00:20:39.704 fused_ordering(67) 00:20:39.704 fused_ordering(68) 00:20:39.704 fused_ordering(69) 00:20:39.704 fused_ordering(70) 00:20:39.704 fused_ordering(71) 00:20:39.704 fused_ordering(72) 00:20:39.704 fused_ordering(73) 00:20:39.704 fused_ordering(74) 00:20:39.704 fused_ordering(75) 00:20:39.704 fused_ordering(76) 00:20:39.704 fused_ordering(77) 00:20:39.704 fused_ordering(78) 00:20:39.704 fused_ordering(79) 00:20:39.704 fused_ordering(80) 00:20:39.704 fused_ordering(81) 00:20:39.704 fused_ordering(82) 00:20:39.704 fused_ordering(83) 00:20:39.704 fused_ordering(84) 00:20:39.704 fused_ordering(85) 00:20:39.704 fused_ordering(86) 00:20:39.704 fused_ordering(87) 00:20:39.704 fused_ordering(88) 00:20:39.704 fused_ordering(89) 00:20:39.704 fused_ordering(90) 00:20:39.704 fused_ordering(91) 00:20:39.704 fused_ordering(92) 00:20:39.704 fused_ordering(93) 00:20:39.704 fused_ordering(94) 00:20:39.704 fused_ordering(95) 00:20:39.704 fused_ordering(96) 00:20:39.704 
00:20:39.704 - 00:20:41.054 fused_ordering(97) through fused_ordering(956): all iterations completed, no errors reported.
fused_ordering(957) 00:20:41.054 fused_ordering(958) 00:20:41.054 fused_ordering(959) 00:20:41.054 fused_ordering(960) 00:20:41.054 fused_ordering(961) 00:20:41.054 fused_ordering(962) 00:20:41.054 fused_ordering(963) 00:20:41.054 fused_ordering(964) 00:20:41.054 fused_ordering(965) 00:20:41.054 fused_ordering(966) 00:20:41.054 fused_ordering(967) 00:20:41.054 fused_ordering(968) 00:20:41.054 fused_ordering(969) 00:20:41.054 fused_ordering(970) 00:20:41.054 fused_ordering(971) 00:20:41.054 fused_ordering(972) 00:20:41.054 fused_ordering(973) 00:20:41.054 fused_ordering(974) 00:20:41.054 fused_ordering(975) 00:20:41.054 fused_ordering(976) 00:20:41.054 fused_ordering(977) 00:20:41.054 fused_ordering(978) 00:20:41.054 fused_ordering(979) 00:20:41.054 fused_ordering(980) 00:20:41.054 fused_ordering(981) 00:20:41.054 fused_ordering(982) 00:20:41.054 fused_ordering(983) 00:20:41.054 fused_ordering(984) 00:20:41.054 fused_ordering(985) 00:20:41.054 fused_ordering(986) 00:20:41.054 fused_ordering(987) 00:20:41.054 fused_ordering(988) 00:20:41.054 fused_ordering(989) 00:20:41.054 fused_ordering(990) 00:20:41.054 fused_ordering(991) 00:20:41.054 fused_ordering(992) 00:20:41.054 fused_ordering(993) 00:20:41.054 fused_ordering(994) 00:20:41.054 fused_ordering(995) 00:20:41.054 fused_ordering(996) 00:20:41.054 fused_ordering(997) 00:20:41.054 fused_ordering(998) 00:20:41.054 fused_ordering(999) 00:20:41.054 fused_ordering(1000) 00:20:41.054 fused_ordering(1001) 00:20:41.054 fused_ordering(1002) 00:20:41.054 fused_ordering(1003) 00:20:41.054 fused_ordering(1004) 00:20:41.054 fused_ordering(1005) 00:20:41.054 fused_ordering(1006) 00:20:41.054 fused_ordering(1007) 00:20:41.054 fused_ordering(1008) 00:20:41.054 fused_ordering(1009) 00:20:41.054 fused_ordering(1010) 00:20:41.054 fused_ordering(1011) 00:20:41.054 fused_ordering(1012) 00:20:41.054 fused_ordering(1013) 00:20:41.054 fused_ordering(1014) 00:20:41.054 fused_ordering(1015) 00:20:41.054 fused_ordering(1016) 00:20:41.054 fused_ordering(1017) 00:20:41.054 fused_ordering(1018) 00:20:41.054 fused_ordering(1019) 00:20:41.054 fused_ordering(1020) 00:20:41.054 fused_ordering(1021) 00:20:41.054 fused_ordering(1022) 00:20:41.054 fused_ordering(1023) 00:20:41.054 04:17:55 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:41.054 04:17:55 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:41.054 04:17:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:41.054 04:17:55 -- nvmf/common.sh@116 -- # sync 00:20:41.054 04:17:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:41.054 04:17:55 -- nvmf/common.sh@119 -- # set +e 00:20:41.054 04:17:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:41.054 04:17:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:41.054 rmmod nvme_tcp 00:20:41.054 rmmod nvme_fabrics 00:20:41.054 rmmod nvme_keyring 00:20:41.054 04:17:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:41.054 04:17:55 -- nvmf/common.sh@123 -- # set -e 00:20:41.054 04:17:55 -- nvmf/common.sh@124 -- # return 0 00:20:41.054 04:17:55 -- nvmf/common.sh@477 -- # '[' -n 4017721 ']' 00:20:41.054 04:17:55 -- nvmf/common.sh@478 -- # killprocess 4017721 00:20:41.054 04:17:55 -- common/autotest_common.sh@926 -- # '[' -z 4017721 ']' 00:20:41.054 04:17:55 -- common/autotest_common.sh@930 -- # kill -0 4017721 00:20:41.054 04:17:55 -- common/autotest_common.sh@931 -- # uname 00:20:41.054 04:17:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:41.054 04:17:55 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 4017721 00:20:41.054 04:17:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:41.054 04:17:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:41.054 04:17:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4017721' 00:20:41.054 killing process with pid 4017721 00:20:41.054 04:17:55 -- common/autotest_common.sh@945 -- # kill 4017721 00:20:41.054 04:17:55 -- common/autotest_common.sh@950 -- # wait 4017721 00:20:41.313 04:17:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:41.313 04:17:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:41.313 04:17:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:41.313 04:17:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.313 04:17:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:41.313 04:17:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.313 04:17:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.313 04:17:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.853 04:17:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:43.853 00:20:43.853 real 0m10.300s 00:20:43.853 user 0m5.592s 00:20:43.853 sys 0m4.638s 00:20:43.853 04:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:43.853 04:17:57 -- common/autotest_common.sh@10 -- # set +x 00:20:43.853 ************************************ 00:20:43.853 END TEST nvmf_fused_ordering 00:20:43.853 ************************************ 00:20:43.853 04:17:57 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:43.853 04:17:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:43.853 04:17:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:43.853 04:17:57 -- common/autotest_common.sh@10 -- # set +x 00:20:43.853 ************************************ 00:20:43.853 START TEST nvmf_delete_subsystem 00:20:43.853 ************************************ 00:20:43.853 04:17:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:43.853 * Looking for test storage... 
00:20:43.853 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:43.853 04:17:58 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.853 04:17:58 -- nvmf/common.sh@7 -- # uname -s 00:20:43.853 04:17:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.853 04:17:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.853 04:17:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.853 04:17:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.853 04:17:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.853 04:17:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.853 04:17:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.853 04:17:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.853 04:17:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.853 04:17:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.853 04:17:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:43.853 04:17:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:20:43.853 04:17:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.853 04:17:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.853 04:17:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:43.853 04:17:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:43.853 04:17:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.853 04:17:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.853 04:17:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.853 04:17:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.854 04:17:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.854 04:17:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.854 04:17:58 -- paths/export.sh@5 -- # export PATH 00:20:43.854 04:17:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.854 04:17:58 -- nvmf/common.sh@46 -- # : 0 00:20:43.854 04:17:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:43.854 04:17:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:43.854 04:17:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:43.854 04:17:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.854 04:17:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.854 04:17:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:43.854 04:17:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:43.854 04:17:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:43.854 04:17:58 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:20:43.854 04:17:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:43.854 04:17:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.854 04:17:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:43.854 04:17:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:43.854 04:17:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:43.854 04:17:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.854 04:17:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.854 04:17:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.854 04:17:58 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:43.854 04:17:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:43.854 04:17:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:43.854 04:17:58 -- common/autotest_common.sh@10 -- # set +x 00:20:49.136 04:18:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:49.137 04:18:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:49.137 04:18:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:49.137 04:18:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:49.137 04:18:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:49.137 04:18:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:49.137 04:18:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:49.137 04:18:03 -- nvmf/common.sh@294 -- # net_devs=() 00:20:49.137 04:18:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:49.137 04:18:03 -- nvmf/common.sh@295 -- # e810=() 00:20:49.137 04:18:03 -- nvmf/common.sh@295 -- # local -ga e810 00:20:49.137 04:18:03 -- nvmf/common.sh@296 -- 
# x722=() 00:20:49.137 04:18:03 -- nvmf/common.sh@296 -- # local -ga x722 00:20:49.137 04:18:03 -- nvmf/common.sh@297 -- # mlx=() 00:20:49.137 04:18:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:49.137 04:18:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.137 04:18:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:49.137 04:18:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:49.137 04:18:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:49.137 04:18:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:49.137 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:49.137 04:18:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:49.137 04:18:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:49.137 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:49.137 04:18:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:49.137 04:18:03 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:49.137 04:18:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.137 04:18:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:49.137 04:18:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.137 04:18:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:49.137 Found net devices under 0000:27:00.0: cvl_0_0 00:20:49.137 04:18:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.137 04:18:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
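This scan walks the detected PCI functions and resolves each supported NIC to its kernel interface through a sysfs glob, which is where the cvl_0_0/cvl_0_1 names used by the rest of the run come from. A rough standalone sketch of that lookup, assuming bash and the two PCI addresses 0000:27:00.0 and 0000:27:00.1 reported in this run (names and paths other than the sysfs layout are illustrative only):

  #!/usr/bin/env bash
  # Map each PCI network function to its kernel interface name(s),
  # mirroring the pci_net_devs glob evaluated in the trace.
  shopt -s nullglob   # an empty glob should expand to an empty array
  for pci in 0000:27:00.0 0000:27:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      if ((${#pci_net_devs[@]} == 0)); then
          echo "No net devices under $pci"
          continue
      fi
      pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names, e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done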
00:20:49.137 04:18:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.137 04:18:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:49.137 04:18:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.137 04:18:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:49.137 Found net devices under 0000:27:00.1: cvl_0_1 00:20:49.137 04:18:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.137 04:18:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:49.137 04:18:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:49.137 04:18:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:49.137 04:18:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.137 04:18:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.137 04:18:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.137 04:18:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:49.137 04:18:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.137 04:18:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.137 04:18:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:49.137 04:18:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.137 04:18:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.137 04:18:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:49.137 04:18:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:49.137 04:18:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.137 04:18:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.137 04:18:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.137 04:18:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.137 04:18:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:49.137 04:18:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.137 04:18:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.137 04:18:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.137 04:18:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:49.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.772 ms 00:20:49.137 00:20:49.137 --- 10.0.0.2 ping statistics --- 00:20:49.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.137 rtt min/avg/max/mdev = 0.772/0.772/0.772/0.000 ms 00:20:49.137 04:18:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:20:49.137 00:20:49.137 --- 10.0.0.1 ping statistics --- 00:20:49.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.137 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:20:49.137 04:18:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.137 04:18:03 -- nvmf/common.sh@410 -- # return 0 00:20:49.137 04:18:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:49.137 04:18:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.137 04:18:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:49.137 04:18:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.137 04:18:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:49.137 04:18:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:49.137 04:18:03 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:20:49.137 04:18:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:49.137 04:18:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:49.137 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:20:49.137 04:18:03 -- nvmf/common.sh@469 -- # nvmfpid=4022340 00:20:49.137 04:18:03 -- nvmf/common.sh@470 -- # waitforlisten 4022340 00:20:49.137 04:18:03 -- common/autotest_common.sh@819 -- # '[' -z 4022340 ']' 00:20:49.137 04:18:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:49.137 04:18:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.137 04:18:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:49.137 04:18:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.137 04:18:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:49.137 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:20:49.137 [2024-05-14 04:18:03.596323] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:49.137 [2024-05-14 04:18:03.596451] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.137 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.399 [2024-05-14 04:18:03.742763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:49.399 [2024-05-14 04:18:03.846884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:49.399 [2024-05-14 04:18:03.847063] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.399 [2024-05-14 04:18:03.847077] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.399 [2024-05-14 04:18:03.847087] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
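Before the target starts, nvmf_tcp_init moves one port of the back-to-back NIC pair into its own network namespace, so the target address (10.0.0.2 on cvl_0_0) and the initiator address (10.0.0.1 on cvl_0_1) sit on a real TCP path within a single host. A condensed sketch of that topology setup, assuming root privileges and the same interface names, addresses and port used in this run (not a verbatim copy of nvmf/common.sh):

  #!/usr/bin/env bash
  # Recreate the target-in-namespace topology used by nvmf_tcp_init (run as root).
  set -e
  TARGET_IF=cvl_0_0        # physical port handed to the target namespace
  INITIATOR_IF=cvl_0_1     # peer port left in the default namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # Let NVMe/TCP traffic to the default port through any local firewall rules.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions before starting the target, as the trace does.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1
  # The target itself is then launched inside the namespace,
  # e.g. ip netns exec "$NS" nvmf_tgt -m 0x3 as in the trace above.

Deleting the namespace afterwards returns the physical port to the default namespace, so the setup is easy to tear down between runs.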
00:20:49.399 [2024-05-14 04:18:03.847149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.399 [2024-05-14 04:18:03.847153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.965 04:18:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:49.965 04:18:04 -- common/autotest_common.sh@852 -- # return 0 00:20:49.965 04:18:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:49.965 04:18:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:49.965 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:20:49.965 04:18:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.965 04:18:04 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:49.965 04:18:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.965 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:20:49.965 [2024-05-14 04:18:04.344216] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.966 04:18:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.966 04:18:04 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:49.966 04:18:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.966 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:20:49.966 04:18:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.966 04:18:04 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.966 04:18:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.966 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:20:49.966 [2024-05-14 04:18:04.360372] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.966 04:18:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.966 04:18:04 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:49.966 04:18:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.966 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:20:49.966 NULL1 00:20:49.966 04:18:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.966 04:18:04 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:49.966 04:18:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.966 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:20:49.966 Delay0 00:20:49.966 04:18:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.966 04:18:04 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:49.966 04:18:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.966 04:18:04 -- common/autotest_common.sh@10 -- # set +x 00:20:49.966 04:18:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.966 04:18:04 -- target/delete_subsystem.sh@28 -- # perf_pid=4022583 00:20:49.966 04:18:04 -- target/delete_subsystem.sh@30 -- # sleep 2 00:20:49.966 04:18:04 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:49.966 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.966 [2024-05-14 04:18:04.475201] 
subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:51.872 04:18:06 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.872 04:18:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:51.872 04:18:06 -- common/autotest_common.sh@10 -- # set +x 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 starting I/O failed: -6 00:20:52.442 starting I/O failed: -6 00:20:52.442 starting I/O failed: -6 00:20:52.442 starting I/O failed: -6 00:20:52.442 starting I/O failed: -6 
00:20:52.442 starting I/O failed: -6 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 starting I/O failed: -6 00:20:52.442 [2024-05-14 04:18:06.790016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000010340 is same with the state(5) to be set 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error 
(sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.442 Read completed with error (sct=0, sc=8) 00:20:52.442 Write completed with error (sct=0, sc=8) 00:20:52.443 Write completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Write completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Read 
completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Write completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Write completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 Write completed with error (sct=0, sc=8) 00:20:52.443 Read completed with error (sct=0, sc=8) 00:20:52.443 [2024-05-14 04:18:06.790879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61300000ffc0 is same with the state(5) to be set 00:20:53.383 [2024-05-14 04:18:07.742356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002180 is same with the state(5) to be set 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 [2024-05-14 04:18:07.791558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000106c0 is same with the state(5) to be set 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 
00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 [2024-05-14 04:18:07.792196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002dc0 is same with the state(5) to be set 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 [2024-05-14 04:18:07.792434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002340 is same with the state(5) to be set 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Write completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.383 Read completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Write completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Write completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Write completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 
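The (sct=0, sc=8) pairs that dominate the dump above decode, per the NVMe base specification, to Status Code Type 0 (Generic Command Status) and Status Code 0x08, Command Aborted due to SQ Deletion; that is the same condition SPDK later prints as ABORTED - SQ DELETION (00/08) once the queue pairs are torn down. A small helper for eyeballing these pairs while reading such logs, written as an illustrative bash sketch rather than anything the test suite itself ships:

    # decode_status SCT SC: name the status pairs that appear in this run.
    decode_status() {
        local sct=$1 sc=$2
        if (( sct == 0 && sc == 8 )); then
            echo "Generic Command Status / Command Aborted due to SQ Deletion"
        elif (( sct == 0 && sc == 0 )); then
            echo "Generic Command Status / Successful Completion"
        else
            echo "sct=$sct sc=$sc (see the NVMe base specification status tables)"
        fi
    }
    decode_status 0 8   # prints: Generic Command Status / Command Aborted due to SQ Deletion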
00:20:53.384 Write completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Write completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Write completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 Read completed with error (sct=0, sc=8) 00:20:53.384 [2024-05-14 04:18:07.792650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000026c0 is same with the state(5) to be set 00:20:53.384 04:18:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.384 04:18:07 -- target/delete_subsystem.sh@34 -- # delay=0 00:20:53.384 04:18:07 -- target/delete_subsystem.sh@35 -- # kill -0 4022583 00:20:53.384 [2024-05-14 04:18:07.794909] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000002180 (9): Bad file descriptor 00:20:53.384 04:18:07 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:20:53.384 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:53.384 Initializing NVMe Controllers 00:20:53.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.384 Controller IO queue size 128, less than required. 00:20:53.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:53.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:53.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:53.384 Initialization complete. Launching workers. 00:20:53.384 ======================================================== 00:20:53.384 Latency(us) 00:20:53.384 Device Information : IOPS MiB/s Average min max 00:20:53.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 180.79 0.09 969838.91 524.13 1014184.10 00:20:53.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.97 0.08 877831.21 516.77 1012922.83 00:20:53.384 ======================================================== 00:20:53.384 Total : 335.76 0.16 927373.82 516.77 1014184.10 00:20:53.384 00:20:53.952 04:18:08 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:20:53.952 04:18:08 -- target/delete_subsystem.sh@35 -- # kill -0 4022583 00:20:53.952 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4022583) - No such process 00:20:53.952 04:18:08 -- target/delete_subsystem.sh@45 -- # NOT wait 4022583 00:20:53.952 04:18:08 -- common/autotest_common.sh@640 -- # local es=0 00:20:53.952 04:18:08 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 4022583 00:20:53.952 04:18:08 -- common/autotest_common.sh@628 -- # local arg=wait 00:20:53.952 04:18:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:53.952 04:18:08 -- common/autotest_common.sh@632 -- # type -t wait 00:20:53.952 04:18:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:53.952 04:18:08 -- common/autotest_common.sh@643 -- # wait 4022583 00:20:53.952 04:18:08 -- common/autotest_common.sh@643 -- # es=1 00:20:53.952 04:18:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:53.952 04:18:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:53.952 04:18:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:53.952 04:18:08 -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:53.952 04:18:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:53.952 04:18:08 -- common/autotest_common.sh@10 -- # set +x 00:20:53.952 04:18:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.952 04:18:08 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.952 04:18:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:53.952 04:18:08 -- common/autotest_common.sh@10 -- # set +x 00:20:53.952 [2024-05-14 04:18:08.317853] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.952 04:18:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.952 04:18:08 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:53.952 04:18:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:53.952 04:18:08 -- common/autotest_common.sh@10 -- # set +x 00:20:53.952 04:18:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.952 04:18:08 -- target/delete_subsystem.sh@54 -- # perf_pid=4023702 00:20:53.952 04:18:08 -- target/delete_subsystem.sh@56 -- # delay=0 00:20:53.952 04:18:08 -- target/delete_subsystem.sh@57 -- # kill -0 4023702 00:20:53.952 04:18:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:53.952 04:18:08 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:53.952 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.952 [2024-05-14 04:18:08.410189] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
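Stripped of the rpc_cmd and xtrace wrappers, the block above recreates the subsystem that the previous iteration deleted out from under its I/O (the source of the SQ-deletion aborts earlier), re-exposes it on 10.0.0.2:4420, attaches the Delay0 bdev as namespace 1, and starts a short spdk_nvme_perf run that the script then polls until it exits. An illustrative stand-alone sketch of the same steps using scripts/rpc.py (the default RPC socket is assumed; the parameter values themselves are taken from the trace):

    # Recreate the test subsystem and expose it over NVMe/TCP.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Three-second random 70%-read workload at queue depth 128 on cores 2 and 3 (-c 0xC).
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # Bounded liveness poll mirroring the delay loop in the trace. In the earlier iteration the
    # subsystem was deleted while this loop ran (nvmf_delete_subsystem), which is what aborted
    # the in-flight commands with SQ DELETION; here the run is simply allowed to finish.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null && (( delay++ <= 20 )); do
        sleep 0.5
    done
    wait "$perf_pid"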
00:20:54.522 04:18:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:20:54.522 04:18:08 -- target/delete_subsystem.sh@57 -- # kill -0 4023702
00:20:54.522 04:18:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:20:54.783 04:18:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:20:54.783 04:18:09 -- target/delete_subsystem.sh@57 -- # kill -0 4023702
00:20:54.783 04:18:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:20:55.428 04:18:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:20:55.428 04:18:09 -- target/delete_subsystem.sh@57 -- # kill -0 4023702
00:20:55.428 04:18:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:20:55.998 04:18:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:20:55.998 04:18:10 -- target/delete_subsystem.sh@57 -- # kill -0 4023702
00:20:55.998 04:18:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:20:56.567 04:18:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:20:56.567 04:18:10 -- target/delete_subsystem.sh@57 -- # kill -0 4023702
00:20:56.567 04:18:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:20:56.826 04:18:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:20:56.826 04:18:11 -- target/delete_subsystem.sh@57 -- # kill -0 4023702
00:20:56.826 04:18:11 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:20:57.392 Initializing NVMe Controllers
00:20:57.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:57.392 Controller IO queue size 128, less than required.
00:20:57.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:20:57.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:20:57.392 Initialization complete. Launching workers.
00:20:57.392 ======================================================== 00:20:57.392 Latency(us) 00:20:57.392 Device Information : IOPS MiB/s Average min max 00:20:57.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002800.70 1000142.38 1041172.51 00:20:57.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004076.99 1000234.41 1011467.41 00:20:57.392 ======================================================== 00:20:57.392 Total : 256.00 0.12 1003438.84 1000142.38 1041172.51 00:20:57.392 00:20:57.392 04:18:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:57.392 04:18:11 -- target/delete_subsystem.sh@57 -- # kill -0 4023702 00:20:57.392 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4023702) - No such process 00:20:57.392 04:18:11 -- target/delete_subsystem.sh@67 -- # wait 4023702 00:20:57.392 04:18:11 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:57.392 04:18:11 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:20:57.392 04:18:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:57.392 04:18:11 -- nvmf/common.sh@116 -- # sync 00:20:57.392 04:18:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:57.392 04:18:11 -- nvmf/common.sh@119 -- # set +e 00:20:57.392 04:18:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:57.392 04:18:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:57.392 rmmod nvme_tcp 00:20:57.392 rmmod nvme_fabrics 00:20:57.392 rmmod nvme_keyring 00:20:57.392 04:18:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:57.392 04:18:11 -- nvmf/common.sh@123 -- # set -e 00:20:57.392 04:18:11 -- nvmf/common.sh@124 -- # return 0 00:20:57.392 04:18:11 -- nvmf/common.sh@477 -- # '[' -n 4022340 ']' 00:20:57.392 04:18:11 -- nvmf/common.sh@478 -- # killprocess 4022340 00:20:57.392 04:18:11 -- common/autotest_common.sh@926 -- # '[' -z 4022340 ']' 00:20:57.392 04:18:11 -- common/autotest_common.sh@930 -- # kill -0 4022340 00:20:57.392 04:18:11 -- common/autotest_common.sh@931 -- # uname 00:20:57.392 04:18:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:57.392 04:18:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4022340 00:20:57.651 04:18:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:57.651 04:18:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:57.651 04:18:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4022340' 00:20:57.651 killing process with pid 4022340 00:20:57.651 04:18:11 -- common/autotest_common.sh@945 -- # kill 4022340 00:20:57.651 04:18:11 -- common/autotest_common.sh@950 -- # wait 4022340 00:20:57.911 04:18:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:57.911 04:18:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:57.911 04:18:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:57.911 04:18:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.911 04:18:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:57.911 04:18:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.911 04:18:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.911 04:18:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.447 04:18:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:00.447 00:21:00.447 real 0m16.498s 00:21:00.447 user 0m31.231s 00:21:00.447 sys 0m4.786s 00:21:00.447 
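The tail of that trace is the standard teardown (nvmftestfini): unload the initiator-side kernel modules, stop the nvmf_tgt process started for this test, and dismantle the per-test network namespace. A hedged stand-alone approximation (the pid and interface names are the ones from this run; _remove_spdk_ns is an autotest helper whose effect is assumed here to be deleting that namespace):

    # Unload the kernel NVMe-oF initiator modules pulled in during the test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the target application; in the harness it is a child of the same shell, so wait works.
    kill 4022340
    wait 4022340

    # Drop the target-side namespace and clear the initiator-side address.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1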
04:18:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.447 04:18:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.447 ************************************ 00:21:00.447 END TEST nvmf_delete_subsystem 00:21:00.447 ************************************ 00:21:00.447 04:18:14 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:21:00.447 04:18:14 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:21:00.447 04:18:14 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:21:00.447 04:18:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:00.447 04:18:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:00.447 04:18:14 -- common/autotest_common.sh@10 -- # set +x 00:21:00.447 ************************************ 00:21:00.447 START TEST nvmf_host_management 00:21:00.447 ************************************ 00:21:00.447 04:18:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:21:00.447 * Looking for test storage... 00:21:00.447 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:00.447 04:18:14 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.447 04:18:14 -- nvmf/common.sh@7 -- # uname -s 00:21:00.447 04:18:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.447 04:18:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.447 04:18:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.447 04:18:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.447 04:18:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.447 04:18:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.447 04:18:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.447 04:18:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.447 04:18:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.447 04:18:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.447 04:18:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:00.447 04:18:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:21:00.447 04:18:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.447 04:18:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.447 04:18:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:00.447 04:18:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:00.447 04:18:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.447 04:18:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.447 04:18:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.447 04:18:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:00.447 04:18:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.447 04:18:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.447 04:18:14 -- paths/export.sh@5 -- # export PATH 00:21:00.447 04:18:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.447 04:18:14 -- nvmf/common.sh@46 -- # : 0 00:21:00.447 04:18:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:00.447 04:18:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:00.447 04:18:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:00.447 04:18:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.447 04:18:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.447 04:18:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:00.447 04:18:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:00.447 04:18:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:00.447 04:18:14 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:00.447 04:18:14 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:00.447 04:18:14 -- target/host_management.sh@104 -- # nvmftestinit 00:21:00.447 04:18:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:00.447 04:18:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.447 04:18:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:00.447 04:18:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:00.447 04:18:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:00.447 04:18:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.447 04:18:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.447 04:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.447 04:18:14 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:00.447 04:18:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:00.447 04:18:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:00.447 04:18:14 -- common/autotest_common.sh@10 -- # set +x 
00:21:05.724 04:18:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:05.724 04:18:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:05.724 04:18:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:05.724 04:18:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:05.724 04:18:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:05.724 04:18:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:05.724 04:18:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:05.724 04:18:19 -- nvmf/common.sh@294 -- # net_devs=() 00:21:05.724 04:18:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:05.724 04:18:19 -- nvmf/common.sh@295 -- # e810=() 00:21:05.724 04:18:19 -- nvmf/common.sh@295 -- # local -ga e810 00:21:05.724 04:18:19 -- nvmf/common.sh@296 -- # x722=() 00:21:05.724 04:18:19 -- nvmf/common.sh@296 -- # local -ga x722 00:21:05.724 04:18:19 -- nvmf/common.sh@297 -- # mlx=() 00:21:05.724 04:18:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:05.724 04:18:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.724 04:18:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:05.724 04:18:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:05.724 04:18:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:05.724 04:18:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:05.724 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:05.724 04:18:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:05.724 04:18:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:05.724 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:05.724 04:18:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@351 -- # [[ tcp == 
rdma ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:05.724 04:18:19 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:05.724 04:18:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.724 04:18:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:05.724 04:18:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.724 04:18:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:05.724 Found net devices under 0000:27:00.0: cvl_0_0 00:21:05.724 04:18:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.724 04:18:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:05.724 04:18:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.724 04:18:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:05.724 04:18:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.724 04:18:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:05.724 Found net devices under 0000:27:00.1: cvl_0_1 00:21:05.724 04:18:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.724 04:18:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:05.724 04:18:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:05.724 04:18:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:05.724 04:18:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:05.724 04:18:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.724 04:18:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.724 04:18:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.724 04:18:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:05.724 04:18:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.724 04:18:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.724 04:18:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:05.724 04:18:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.724 04:18:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.724 04:18:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:05.724 04:18:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:05.724 04:18:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.724 04:18:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.724 04:18:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.724 04:18:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.724 04:18:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:05.724 04:18:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.724 04:18:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.724 04:18:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.724 04:18:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:05.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:05.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:21:05.724 00:21:05.724 --- 10.0.0.2 ping statistics --- 00:21:05.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.724 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:05.724 04:18:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:21:05.724 00:21:05.724 --- 10.0.0.1 ping statistics --- 00:21:05.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.724 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:05.724 04:18:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.724 04:18:20 -- nvmf/common.sh@410 -- # return 0 00:21:05.724 04:18:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:05.724 04:18:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.724 04:18:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:05.724 04:18:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:05.724 04:18:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.724 04:18:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:05.724 04:18:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:05.724 04:18:20 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:21:05.724 04:18:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:05.724 04:18:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:05.724 04:18:20 -- common/autotest_common.sh@10 -- # set +x 00:21:05.724 ************************************ 00:21:05.724 START TEST nvmf_host_management 00:21:05.724 ************************************ 00:21:05.724 04:18:20 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:21:05.724 04:18:20 -- target/host_management.sh@69 -- # starttarget 00:21:05.724 04:18:20 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:21:05.724 04:18:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:05.724 04:18:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:05.724 04:18:20 -- common/autotest_common.sh@10 -- # set +x 00:21:05.724 04:18:20 -- nvmf/common.sh@469 -- # nvmfpid=4028420 00:21:05.724 04:18:20 -- nvmf/common.sh@470 -- # waitforlisten 4028420 00:21:05.724 04:18:20 -- common/autotest_common.sh@819 -- # '[' -z 4028420 ']' 00:21:05.724 04:18:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.724 04:18:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:05.724 04:18:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.724 04:18:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:05.724 04:18:20 -- common/autotest_common.sh@10 -- # set +x 00:21:05.724 04:18:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:05.724 [2024-05-14 04:18:20.274048] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
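At this point nvmftestinit has turned the two ice ports found at 0000:27:00.0 and 0000:27:00.1 (device 0x159b, exposed as cvl_0_0 and cvl_0_1) into a self-contained NVMe/TCP test bed: the target port lives in its own network namespace with 10.0.0.2/24, the initiator port stays in the root namespace with 10.0.0.1/24, and both directions are ping-checked before nvme-tcp is loaded and the target application is started. A condensed, hedged sketch of the commands the harness ran (interface and namespace names are simply what this run assigned):

    # Target side: move cvl_0_0 into its own namespace and address it.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Initiator side: address the peer port in the root namespace and open TCP/4420 on it.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Connectivity check in both directions, as in the ping output above.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1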
00:21:05.724 [2024-05-14 04:18:20.274181] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.984 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.984 [2024-05-14 04:18:20.415757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.984 [2024-05-14 04:18:20.510800] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:05.984 [2024-05-14 04:18:20.510995] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.984 [2024-05-14 04:18:20.511009] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.984 [2024-05-14 04:18:20.511019] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.984 [2024-05-14 04:18:20.511102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.984 [2024-05-14 04:18:20.511133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.984 [2024-05-14 04:18:20.511255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.984 [2024-05-14 04:18:20.511284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:06.553 04:18:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:06.553 04:18:20 -- common/autotest_common.sh@852 -- # return 0 00:21:06.553 04:18:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:06.553 04:18:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:06.553 04:18:20 -- common/autotest_common.sh@10 -- # set +x 00:21:06.553 04:18:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.553 04:18:21 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.553 04:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.553 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:06.553 [2024-05-14 04:18:21.028743] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.553 04:18:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.553 04:18:21 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:21:06.553 04:18:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:06.553 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:06.553 04:18:21 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:06.553 04:18:21 -- target/host_management.sh@23 -- # cat 00:21:06.553 04:18:21 -- target/host_management.sh@30 -- # rpc_cmd 00:21:06.553 04:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.553 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:06.553 Malloc0 00:21:06.553 [2024-05-14 04:18:21.107487] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.553 04:18:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.553 04:18:21 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:21:06.553 04:18:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:06.553 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:06.815 04:18:21 -- target/host_management.sh@73 -- # perfpid=4028739 00:21:06.815 04:18:21 -- target/host_management.sh@74 -- # 
waitforlisten 4028739 /var/tmp/bdevperf.sock 00:21:06.815 04:18:21 -- common/autotest_common.sh@819 -- # '[' -z 4028739 ']' 00:21:06.815 04:18:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.815 04:18:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:06.815 04:18:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.815 04:18:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:06.815 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:06.815 04:18:21 -- target/host_management.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:06.815 04:18:21 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:21:06.815 04:18:21 -- nvmf/common.sh@520 -- # config=() 00:21:06.815 04:18:21 -- nvmf/common.sh@520 -- # local subsystem config 00:21:06.815 04:18:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:06.815 04:18:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:06.815 { 00:21:06.815 "params": { 00:21:06.815 "name": "Nvme$subsystem", 00:21:06.815 "trtype": "$TEST_TRANSPORT", 00:21:06.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.816 "adrfam": "ipv4", 00:21:06.816 "trsvcid": "$NVMF_PORT", 00:21:06.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.816 "hdgst": ${hdgst:-false}, 00:21:06.816 "ddgst": ${ddgst:-false} 00:21:06.816 }, 00:21:06.816 "method": "bdev_nvme_attach_controller" 00:21:06.816 } 00:21:06.816 EOF 00:21:06.816 )") 00:21:06.816 04:18:21 -- nvmf/common.sh@542 -- # cat 00:21:06.816 04:18:21 -- nvmf/common.sh@544 -- # jq . 00:21:06.816 04:18:21 -- nvmf/common.sh@545 -- # IFS=, 00:21:06.816 04:18:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:06.816 "params": { 00:21:06.816 "name": "Nvme0", 00:21:06.816 "trtype": "tcp", 00:21:06.816 "traddr": "10.0.0.2", 00:21:06.816 "adrfam": "ipv4", 00:21:06.816 "trsvcid": "4420", 00:21:06.816 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:06.816 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:06.816 "hdgst": false, 00:21:06.816 "ddgst": false 00:21:06.816 }, 00:21:06.816 "method": "bdev_nvme_attach_controller" 00:21:06.816 }' 00:21:06.816 [2024-05-14 04:18:21.236304] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:06.816 [2024-05-14 04:18:21.236448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028739 ] 00:21:06.816 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.816 [2024-05-14 04:18:21.366432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.076 [2024-05-14 04:18:21.456511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.336 Running I/O for 10 seconds... 
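The JSON assembled by gen_nvmf_target_json above is what bdevperf reads from /dev/fd/63: its only action is a bdev_nvme_attach_controller call, so the Nvme0n1 bdev queried by the bdev_get_iostat polls further down exists as soon as the configuration is loaded. A file-based equivalent, as a sketch; the params block is verbatim from the trace, while the outer subsystems/bdev wrapper is the usual SPDK JSON-config shape and is assumed here:

    cat > /tmp/bdevperf_nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # 64-deep, 64 KiB verify workload for 10 seconds, matching the command line in the trace.
    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf_nvmf.json -q 64 -o 65536 -w verify -t 10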
00:21:07.600 04:18:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:07.600 04:18:21 -- common/autotest_common.sh@852 -- # return 0 00:21:07.600 04:18:21 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:07.600 04:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.600 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.600 04:18:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.600 04:18:21 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.600 04:18:21 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:21:07.600 04:18:21 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:07.600 04:18:21 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:21:07.600 04:18:21 -- target/host_management.sh@52 -- # local ret=1 00:21:07.600 04:18:21 -- target/host_management.sh@53 -- # local i 00:21:07.600 04:18:21 -- target/host_management.sh@54 -- # (( i = 10 )) 00:21:07.600 04:18:21 -- target/host_management.sh@54 -- # (( i != 0 )) 00:21:07.600 04:18:21 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:21:07.600 04:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.600 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.600 04:18:21 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:21:07.600 04:18:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.600 04:18:21 -- target/host_management.sh@55 -- # read_io_count=699 00:21:07.600 04:18:21 -- target/host_management.sh@58 -- # '[' 699 -ge 100 ']' 00:21:07.600 04:18:21 -- target/host_management.sh@59 -- # ret=0 00:21:07.600 04:18:21 -- target/host_management.sh@60 -- # break 00:21:07.600 04:18:21 -- target/host_management.sh@64 -- # return 0 00:21:07.600 04:18:21 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:21:07.600 04:18:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.600 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:07.600 [2024-05-14 04:18:21.996937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.600 [2024-05-14 04:18:21.997219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:21:07.601 [2024-05-14 04:18:21.997633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.601 [2024-05-14 04:18:21.997812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.997977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.997987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.601 [2024-05-14 04:18:21.997995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.601 [2024-05-14 04:18:21.998192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.601 [2024-05-14 04:18:21.998254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.601 [2024-05-14 04:18:21.998262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 
[2024-05-14 04:18:21.998367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 
04:18:21.998544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998721] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.602 [2024-05-14 04:18:21.998844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.602 [2024-05-14 04:18:21.998994] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000003d80 was disconnected and freed. reset controller. 
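Note on the block of notices above: when the test forces the controller reset, the commands still outstanding on I/O qpair 1 are all completed with the generic "ABORTED - SQ DELETION (00/08)" status, and nvme_qpair.c prints one command/completion pair per aborted request; the final bdev_nvme notice confirms the qpair was then disconnected and freed. When triaging a run like this, a quick way to size the abort storm from a saved console log (the file name build.log here is only an example, not part of the test) is:

  # count completions aborted by the submission-queue deletion
  grep -c 'ABORTED - SQ DELETION' build.log
  # break the aborted commands down by opcode (READ vs WRITE)
  grep -Eo '(READ|WRITE) sqid:1 cid:[0-9]+' build.log | awk '{print $1}' | sort | uniq -c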
00:21:07.602 [2024-05-14 04:18:21.999920] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:07.602 task offset: 107008 on job bdev=Nvme0n1 fails 00:21:07.602 00:21:07.602 Latency(us) 00:21:07.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.602 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.602 Job: Nvme0n1 ended in about 0.20 seconds with error 00:21:07.602 Verification LBA range: start 0x0 length 0x400 00:21:07.602 Nvme0n1 : 0.20 3988.58 249.29 312.45 0.00 14538.38 2569.70 20557.61 00:21:07.602 =================================================================================================================== 00:21:07.602 Total : 3988.58 249.29 312.45 0.00 14538.38 2569.70 20557.61 00:21:07.602 04:18:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.602 04:18:22 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:21:07.602 04:18:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.602 04:18:22 -- common/autotest_common.sh@10 -- # set +x 00:21:07.602 [2024-05-14 04:18:22.002485] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:07.602 [2024-05-14 04:18:22.002528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:21:07.602 [2024-05-14 04:18:22.008291] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:21:07.602 [2024-05-14 04:18:22.008428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:07.602 [2024-05-14 04:18:22.008459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.603 [2024-05-14 04:18:22.008482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:21:07.603 [2024-05-14 04:18:22.008491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:21:07.603 [2024-05-14 04:18:22.008502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:07.603 [2024-05-14 04:18:22.008511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x613000003140 00:21:07.603 [2024-05-14 04:18:22.008537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:21:07.603 [2024-05-14 04:18:22.008552] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:07.603 [2024-05-14 04:18:22.008562] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:07.603 [2024-05-14 04:18:22.008573] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:07.603 [2024-05-14 04:18:22.008593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
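The connect errors above ("Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'") are expected at this point: the subsystem does not yet allow that host NQN, and the test only now whitelists it via rpc_cmd nvmf_subsystem_add_host. A minimal standalone sketch of that step, reusing the paths and NQNs from this run:

  RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  # allow host0 to connect to cnode0; until this is done the target rejects
  # its FABRIC CONNECT with "does not allow host"
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0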
00:21:07.603 04:18:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.603 04:18:22 -- target/host_management.sh@87 -- # sleep 1 00:21:08.543 04:18:23 -- target/host_management.sh@91 -- # kill -9 4028739 00:21:08.543 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4028739) - No such process 00:21:08.543 04:18:23 -- target/host_management.sh@91 -- # true 00:21:08.543 04:18:23 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:21:08.543 04:18:23 -- target/host_management.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:08.543 04:18:23 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:21:08.543 04:18:23 -- nvmf/common.sh@520 -- # config=() 00:21:08.543 04:18:23 -- nvmf/common.sh@520 -- # local subsystem config 00:21:08.543 04:18:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:08.543 04:18:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:08.543 { 00:21:08.543 "params": { 00:21:08.543 "name": "Nvme$subsystem", 00:21:08.543 "trtype": "$TEST_TRANSPORT", 00:21:08.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.543 "adrfam": "ipv4", 00:21:08.543 "trsvcid": "$NVMF_PORT", 00:21:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.543 "hdgst": ${hdgst:-false}, 00:21:08.543 "ddgst": ${ddgst:-false} 00:21:08.543 }, 00:21:08.543 "method": "bdev_nvme_attach_controller" 00:21:08.543 } 00:21:08.543 EOF 00:21:08.543 )") 00:21:08.543 04:18:23 -- nvmf/common.sh@542 -- # cat 00:21:08.543 04:18:23 -- nvmf/common.sh@544 -- # jq . 00:21:08.543 04:18:23 -- nvmf/common.sh@545 -- # IFS=, 00:21:08.543 04:18:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:08.543 "params": { 00:21:08.543 "name": "Nvme0", 00:21:08.543 "trtype": "tcp", 00:21:08.543 "traddr": "10.0.0.2", 00:21:08.543 "adrfam": "ipv4", 00:21:08.543 "trsvcid": "4420", 00:21:08.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:08.543 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:08.543 "hdgst": false, 00:21:08.543 "ddgst": false 00:21:08.543 }, 00:21:08.543 "method": "bdev_nvme_attach_controller" 00:21:08.543 }' 00:21:08.543 [2024-05-14 04:18:23.100855] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:08.543 [2024-05-14 04:18:23.101004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029079 ] 00:21:08.805 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.805 [2024-05-14 04:18:23.228683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.805 [2024-05-14 04:18:23.319355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.383 Running I/O for 1 seconds... 
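For reference, the gen_nvmf_target_json helper traced above expands its quoted template into one bdev_nvme_attach_controller entry per subsystem (the printf output shows the entry generated for Nvme0) and hands the resulting bdev configuration to bdevperf on /dev/fd/62. A hand-written equivalent of this run's invocation, assuming a scratch file named bdevperf.json and the standard SPDK "subsystems" wrapper around the printed entry, would look roughly like:

  cat > bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
      --json bdevperf.json -q 64 -o 65536 -w verify -t 1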
00:21:10.322 00:21:10.322 Latency(us) 00:21:10.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.322 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.322 Verification LBA range: start 0x0 length 0x400 00:21:10.322 Nvme0n1 : 1.01 4196.57 262.29 0.00 0.00 15056.65 2173.04 21661.37 00:21:10.322 =================================================================================================================== 00:21:10.322 Total : 4196.57 262.29 0.00 0.00 15056.65 2173.04 21661.37 00:21:10.583 04:18:25 -- target/host_management.sh@101 -- # stoptarget 00:21:10.583 04:18:25 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:21:10.583 04:18:25 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:10.583 04:18:25 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:10.583 04:18:25 -- target/host_management.sh@40 -- # nvmftestfini 00:21:10.583 04:18:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:10.583 04:18:25 -- nvmf/common.sh@116 -- # sync 00:21:10.583 04:18:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:10.583 04:18:25 -- nvmf/common.sh@119 -- # set +e 00:21:10.583 04:18:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:10.583 04:18:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:10.583 rmmod nvme_tcp 00:21:10.583 rmmod nvme_fabrics 00:21:10.583 rmmod nvme_keyring 00:21:10.583 04:18:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:10.583 04:18:25 -- nvmf/common.sh@123 -- # set -e 00:21:10.583 04:18:25 -- nvmf/common.sh@124 -- # return 0 00:21:10.583 04:18:25 -- nvmf/common.sh@477 -- # '[' -n 4028420 ']' 00:21:10.583 04:18:25 -- nvmf/common.sh@478 -- # killprocess 4028420 00:21:10.583 04:18:25 -- common/autotest_common.sh@926 -- # '[' -z 4028420 ']' 00:21:10.583 04:18:25 -- common/autotest_common.sh@930 -- # kill -0 4028420 00:21:10.583 04:18:25 -- common/autotest_common.sh@931 -- # uname 00:21:10.583 04:18:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:10.583 04:18:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4028420 00:21:10.583 04:18:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:10.583 04:18:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:10.583 04:18:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4028420' 00:21:10.583 killing process with pid 4028420 00:21:10.583 04:18:25 -- common/autotest_common.sh@945 -- # kill 4028420 00:21:10.583 04:18:25 -- common/autotest_common.sh@950 -- # wait 4028420 00:21:11.154 [2024-05-14 04:18:25.619093] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:21:11.154 04:18:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:11.154 04:18:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:11.154 04:18:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:11.154 04:18:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.154 04:18:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:11.154 04:18:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.154 04:18:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.154 04:18:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.694 04:18:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:13.694 
00:21:13.694 real 0m7.574s 00:21:13.694 user 0m23.036s 00:21:13.694 sys 0m1.315s 00:21:13.694 04:18:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.694 04:18:27 -- common/autotest_common.sh@10 -- # set +x 00:21:13.694 ************************************ 00:21:13.694 END TEST nvmf_host_management 00:21:13.694 ************************************ 00:21:13.694 04:18:27 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:13.695 00:21:13.695 real 0m13.246s 00:21:13.695 user 0m24.616s 00:21:13.695 sys 0m5.386s 00:21:13.695 04:18:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.695 04:18:27 -- common/autotest_common.sh@10 -- # set +x 00:21:13.695 ************************************ 00:21:13.695 END TEST nvmf_host_management 00:21:13.695 ************************************ 00:21:13.695 04:18:27 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:21:13.695 04:18:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:13.695 04:18:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:13.695 04:18:27 -- common/autotest_common.sh@10 -- # set +x 00:21:13.695 ************************************ 00:21:13.695 START TEST nvmf_lvol 00:21:13.695 ************************************ 00:21:13.695 04:18:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:21:13.695 * Looking for test storage... 00:21:13.695 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:13.695 04:18:27 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.695 04:18:27 -- nvmf/common.sh@7 -- # uname -s 00:21:13.695 04:18:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.695 04:18:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.695 04:18:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.695 04:18:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.695 04:18:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.695 04:18:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.695 04:18:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.695 04:18:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.695 04:18:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.695 04:18:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.695 04:18:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:13.695 04:18:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:21:13.695 04:18:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.695 04:18:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.695 04:18:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:13.695 04:18:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:13.695 04:18:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.695 04:18:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.695 04:18:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.695 04:18:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.695 04:18:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.695 04:18:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.695 04:18:27 -- paths/export.sh@5 -- # export PATH 00:21:13.695 04:18:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.695 04:18:27 -- nvmf/common.sh@46 -- # : 0 00:21:13.695 04:18:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:13.695 04:18:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:13.695 04:18:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:13.695 04:18:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.695 04:18:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.695 04:18:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:13.695 04:18:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:13.695 04:18:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:13.695 04:18:27 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:13.695 04:18:27 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:13.695 04:18:27 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:21:13.695 04:18:27 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:21:13.695 04:18:27 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:13.695 04:18:27 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:21:13.695 04:18:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:13.695 04:18:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:21:13.695 04:18:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:13.695 04:18:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:13.695 04:18:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:13.695 04:18:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.695 04:18:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.695 04:18:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.695 04:18:27 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:13.695 04:18:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:13.695 04:18:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:13.695 04:18:27 -- common/autotest_common.sh@10 -- # set +x 00:21:18.969 04:18:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:18.969 04:18:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:18.969 04:18:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:18.969 04:18:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:18.969 04:18:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:18.969 04:18:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:18.969 04:18:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:18.969 04:18:33 -- nvmf/common.sh@294 -- # net_devs=() 00:21:18.969 04:18:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:18.969 04:18:33 -- nvmf/common.sh@295 -- # e810=() 00:21:18.969 04:18:33 -- nvmf/common.sh@295 -- # local -ga e810 00:21:18.969 04:18:33 -- nvmf/common.sh@296 -- # x722=() 00:21:18.969 04:18:33 -- nvmf/common.sh@296 -- # local -ga x722 00:21:18.969 04:18:33 -- nvmf/common.sh@297 -- # mlx=() 00:21:18.969 04:18:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:18.969 04:18:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.969 04:18:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:18.969 04:18:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:18.969 04:18:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:18.969 04:18:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:18.969 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:18.969 04:18:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:18.969 04:18:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:18.969 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:18.969 04:18:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:18.969 04:18:33 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:18.969 04:18:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.969 04:18:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:18.969 04:18:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.969 04:18:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:18.969 Found net devices under 0000:27:00.0: cvl_0_0 00:21:18.969 04:18:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.969 04:18:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:18.969 04:18:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.969 04:18:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:18.969 04:18:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.969 04:18:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:18.969 Found net devices under 0000:27:00.1: cvl_0_1 00:21:18.969 04:18:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.969 04:18:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:18.969 04:18:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:18.969 04:18:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:18.969 04:18:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.969 04:18:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.969 04:18:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.969 04:18:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:18.969 04:18:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.969 04:18:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.969 04:18:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:18.969 04:18:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.969 04:18:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.969 04:18:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:18.969 04:18:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:18.969 04:18:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.969 04:18:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.969 04:18:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.969 04:18:33 -- nvmf/common.sh@254 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.969 04:18:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:18.969 04:18:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.969 04:18:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.969 04:18:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.969 04:18:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:18.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:21:18.969 00:21:18.969 --- 10.0.0.2 ping statistics --- 00:21:18.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.969 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:21:18.969 04:18:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:21:18.969 00:21:18.969 --- 10.0.0.1 ping statistics --- 00:21:18.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.969 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:21:18.969 04:18:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.969 04:18:33 -- nvmf/common.sh@410 -- # return 0 00:21:18.969 04:18:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:18.969 04:18:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.969 04:18:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:18.969 04:18:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.969 04:18:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:18.969 04:18:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:18.969 04:18:33 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:21:18.969 04:18:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:18.969 04:18:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:18.969 04:18:33 -- common/autotest_common.sh@10 -- # set +x 00:21:18.969 04:18:33 -- nvmf/common.sh@469 -- # nvmfpid=4033383 00:21:18.969 04:18:33 -- nvmf/common.sh@470 -- # waitforlisten 4033383 00:21:18.969 04:18:33 -- common/autotest_common.sh@819 -- # '[' -z 4033383 ']' 00:21:18.969 04:18:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.969 04:18:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:18.969 04:18:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.969 04:18:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:18.969 04:18:33 -- common/autotest_common.sh@10 -- # set +x 00:21:18.969 04:18:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:21:18.969 [2024-05-14 04:18:33.435918] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:18.969 [2024-05-14 04:18:33.436023] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.969 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.229 [2024-05-14 04:18:33.557394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:19.229 [2024-05-14 04:18:33.649752] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:19.229 [2024-05-14 04:18:33.649944] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.229 [2024-05-14 04:18:33.649959] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.229 [2024-05-14 04:18:33.649968] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.229 [2024-05-14 04:18:33.650038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.229 [2024-05-14 04:18:33.650065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.229 [2024-05-14 04:18:33.650060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.801 04:18:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:19.801 04:18:34 -- common/autotest_common.sh@852 -- # return 0 00:21:19.801 04:18:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:19.801 04:18:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:19.801 04:18:34 -- common/autotest_common.sh@10 -- # set +x 00:21:19.801 04:18:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.801 04:18:34 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:19.801 [2024-05-14 04:18:34.321039] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.801 04:18:34 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:20.095 04:18:34 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:21:20.095 04:18:34 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:20.353 04:18:34 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:21:20.353 04:18:34 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:21:20.353 04:18:34 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:21:20.611 04:18:34 -- target/nvmf_lvol.sh@29 -- # lvs=739bcbf6-f383-4d95-a4df-3473dca3b170 00:21:20.611 04:18:34 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 739bcbf6-f383-4d95-a4df-3473dca3b170 lvol 20 00:21:20.611 04:18:35 -- target/nvmf_lvol.sh@32 -- # lvol=6c3e86d8-c06d-480b-bcb3-5fdf1f9b587e 00:21:20.611 04:18:35 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:20.870 04:18:35 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6c3e86d8-c06d-480b-bcb3-5fdf1f9b587e 00:21:20.870 04:18:35 -- 
target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:21.132 [2024-05-14 04:18:35.491534] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.132 04:18:35 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:21.132 04:18:35 -- target/nvmf_lvol.sh@42 -- # perf_pid=4033944 00:21:21.132 04:18:35 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:21:21.132 04:18:35 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:21:21.392 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.330 04:18:36 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6c3e86d8-c06d-480b-bcb3-5fdf1f9b587e MY_SNAPSHOT 00:21:22.330 04:18:36 -- target/nvmf_lvol.sh@47 -- # snapshot=92dcff4c-fb6e-4946-b897-6b66ef4de586 00:21:22.330 04:18:36 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6c3e86d8-c06d-480b-bcb3-5fdf1f9b587e 30 00:21:22.588 04:18:37 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 92dcff4c-fb6e-4946-b897-6b66ef4de586 MY_CLONE 00:21:22.588 04:18:37 -- target/nvmf_lvol.sh@49 -- # clone=0e91bf98-2671-44c8-a34d-4aea9be66b87 00:21:22.588 04:18:37 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0e91bf98-2671-44c8-a34d-4aea9be66b87 00:21:23.158 04:18:37 -- target/nvmf_lvol.sh@53 -- # wait 4033944 00:21:33.143 Initializing NVMe Controllers 00:21:33.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:21:33.143 Controller IO queue size 128, less than required. 00:21:33.143 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:33.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:21:33.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:21:33.143 Initialization complete. Launching workers. 
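For readability, the sequence nvmf_lvol.sh drives above reduces to the following RPCs (a condensed sketch; the UUIDs shown are the ones this run reported for the lvstore, lvol, snapshot and clone):

  RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512                                  # Malloc0
  $RPC bdev_malloc_create 64 512                                  # Malloc1
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $RPC bdev_lvol_create_lvstore raid0 lvs                         # 739bcbf6-f383-4d95-a4df-3473dca3b170
  $RPC bdev_lvol_create -u 739bcbf6-f383-4d95-a4df-3473dca3b170 lvol 20    # 6c3e86d8-c06d-480b-bcb3-5fdf1f9b587e
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6c3e86d8-c06d-480b-bcb3-5fdf1f9b587e
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # while spdk_nvme_perf writes to the namespace, grow the volume online:
  $RPC bdev_lvol_snapshot 6c3e86d8-c06d-480b-bcb3-5fdf1f9b587e MY_SNAPSHOT  # 92dcff4c-fb6e-4946-b897-6b66ef4de586
  $RPC bdev_lvol_resize 6c3e86d8-c06d-480b-bcb3-5fdf1f9b587e 30
  $RPC bdev_lvol_clone 92dcff4c-fb6e-4946-b897-6b66ef4de586 MY_CLONE        # 0e91bf98-2671-44c8-a34d-4aea9be66b87
  $RPC bdev_lvol_inflate 0e91bf98-2671-44c8-a34d-4aea9be66b87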
00:21:33.143 ======================================================== 00:21:33.143 Latency(us) 00:21:33.143 Device Information : IOPS MiB/s Average min max 00:21:33.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14056.39 54.91 9108.43 689.39 64728.26 00:21:33.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13987.89 54.64 9152.32 3746.05 66884.13 00:21:33.143 ======================================================== 00:21:33.143 Total : 28044.29 109.55 9130.32 689.39 66884.13 00:21:33.143 00:21:33.143 04:18:46 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:33.143 04:18:46 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6c3e86d8-c06d-480b-bcb3-5fdf1f9b587e 00:21:33.143 04:18:46 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 739bcbf6-f383-4d95-a4df-3473dca3b170 00:21:33.143 04:18:46 -- target/nvmf_lvol.sh@60 -- # rm -f 00:21:33.143 04:18:46 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:21:33.143 04:18:46 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:21:33.143 04:18:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:33.143 04:18:46 -- nvmf/common.sh@116 -- # sync 00:21:33.143 04:18:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:33.143 04:18:46 -- nvmf/common.sh@119 -- # set +e 00:21:33.143 04:18:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:33.143 04:18:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:33.143 rmmod nvme_tcp 00:21:33.143 rmmod nvme_fabrics 00:21:33.143 rmmod nvme_keyring 00:21:33.143 04:18:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:33.143 04:18:46 -- nvmf/common.sh@123 -- # set -e 00:21:33.143 04:18:46 -- nvmf/common.sh@124 -- # return 0 00:21:33.143 04:18:46 -- nvmf/common.sh@477 -- # '[' -n 4033383 ']' 00:21:33.143 04:18:46 -- nvmf/common.sh@478 -- # killprocess 4033383 00:21:33.143 04:18:46 -- common/autotest_common.sh@926 -- # '[' -z 4033383 ']' 00:21:33.143 04:18:46 -- common/autotest_common.sh@930 -- # kill -0 4033383 00:21:33.143 04:18:46 -- common/autotest_common.sh@931 -- # uname 00:21:33.143 04:18:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:33.143 04:18:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4033383 00:21:33.143 04:18:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:33.143 04:18:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:33.143 04:18:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4033383' 00:21:33.143 killing process with pid 4033383 00:21:33.143 04:18:46 -- common/autotest_common.sh@945 -- # kill 4033383 00:21:33.143 04:18:46 -- common/autotest_common.sh@950 -- # wait 4033383 00:21:33.143 04:18:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:33.143 04:18:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:33.143 04:18:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:33.143 04:18:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.143 04:18:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:33.143 04:18:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.143 04:18:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.143 04:18:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.051 
04:18:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:35.051 00:21:35.051 real 0m21.556s 00:21:35.051 user 1m2.959s 00:21:35.051 sys 0m6.369s 00:21:35.051 04:18:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.051 04:18:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.051 ************************************ 00:21:35.051 END TEST nvmf_lvol 00:21:35.051 ************************************ 00:21:35.051 04:18:49 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:35.051 04:18:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:35.051 04:18:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:35.051 04:18:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.051 ************************************ 00:21:35.051 START TEST nvmf_lvs_grow 00:21:35.051 ************************************ 00:21:35.051 04:18:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:35.051 * Looking for test storage... 00:21:35.051 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:35.051 04:18:49 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.051 04:18:49 -- nvmf/common.sh@7 -- # uname -s 00:21:35.051 04:18:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.051 04:18:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.051 04:18:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.052 04:18:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.052 04:18:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.052 04:18:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.052 04:18:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.052 04:18:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.052 04:18:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.052 04:18:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.052 04:18:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:35.052 04:18:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:21:35.052 04:18:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.052 04:18:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.052 04:18:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:35.052 04:18:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:35.052 04:18:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.052 04:18:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.052 04:18:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.052 04:18:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.052 04:18:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.052 04:18:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.052 04:18:49 -- paths/export.sh@5 -- # export PATH 00:21:35.052 04:18:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.052 04:18:49 -- nvmf/common.sh@46 -- # : 0 00:21:35.052 04:18:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:35.052 04:18:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:35.052 04:18:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:35.052 04:18:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.052 04:18:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.052 04:18:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:35.052 04:18:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:35.052 04:18:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:35.052 04:18:49 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:35.052 04:18:49 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.052 04:18:49 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:21:35.052 04:18:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:35.052 04:18:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.052 04:18:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:35.052 04:18:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:35.052 04:18:49 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:21:35.052 04:18:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.052 04:18:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.052 04:18:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.052 04:18:49 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:35.052 04:18:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:35.052 04:18:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:35.052 04:18:49 -- common/autotest_common.sh@10 -- # set +x 00:21:40.331 04:18:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:40.331 04:18:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:40.331 04:18:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:40.331 04:18:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:40.331 04:18:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:40.331 04:18:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:40.331 04:18:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:40.331 04:18:54 -- nvmf/common.sh@294 -- # net_devs=() 00:21:40.331 04:18:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:40.331 04:18:54 -- nvmf/common.sh@295 -- # e810=() 00:21:40.331 04:18:54 -- nvmf/common.sh@295 -- # local -ga e810 00:21:40.331 04:18:54 -- nvmf/common.sh@296 -- # x722=() 00:21:40.331 04:18:54 -- nvmf/common.sh@296 -- # local -ga x722 00:21:40.331 04:18:54 -- nvmf/common.sh@297 -- # mlx=() 00:21:40.331 04:18:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:40.331 04:18:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.331 04:18:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.331 04:18:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.331 04:18:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.331 04:18:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.331 04:18:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.331 04:18:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.331 04:18:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.331 04:18:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.331 04:18:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.332 04:18:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.332 04:18:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:40.332 04:18:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:40.332 04:18:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:40.332 04:18:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:40.332 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:40.332 04:18:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:40.332 
04:18:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:40.332 04:18:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:40.332 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:40.332 04:18:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:40.332 04:18:54 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:40.332 04:18:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.332 04:18:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:40.332 04:18:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.332 04:18:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:40.332 Found net devices under 0000:27:00.0: cvl_0_0 00:21:40.332 04:18:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.332 04:18:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:40.332 04:18:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.332 04:18:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:40.332 04:18:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.332 04:18:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:40.332 Found net devices under 0000:27:00.1: cvl_0_1 00:21:40.332 04:18:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.332 04:18:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:40.332 04:18:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:40.332 04:18:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:40.332 04:18:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:40.332 04:18:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.332 04:18:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.332 04:18:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.332 04:18:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:40.332 04:18:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.332 04:18:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.332 04:18:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:40.332 04:18:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.332 04:18:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.332 04:18:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:40.332 04:18:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:40.332 04:18:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.332 04:18:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.590 04:18:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.590 04:18:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.590 04:18:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:40.590 04:18:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set cvl_0_0 up 00:21:40.590 04:18:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.590 04:18:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.590 04:18:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:40.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:21:40.590 00:21:40.590 --- 10.0.0.2 ping statistics --- 00:21:40.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.590 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:21:40.590 04:18:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:40.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:21:40.590 00:21:40.590 --- 10.0.0.1 ping statistics --- 00:21:40.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.590 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:21:40.590 04:18:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.590 04:18:55 -- nvmf/common.sh@410 -- # return 0 00:21:40.590 04:18:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:40.590 04:18:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.590 04:18:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:40.590 04:18:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:40.590 04:18:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.590 04:18:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:40.590 04:18:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:40.590 04:18:55 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:21:40.590 04:18:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:40.590 04:18:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:40.590 04:18:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.590 04:18:55 -- nvmf/common.sh@469 -- # nvmfpid=4039980 00:21:40.590 04:18:55 -- nvmf/common.sh@470 -- # waitforlisten 4039980 00:21:40.590 04:18:55 -- common/autotest_common.sh@819 -- # '[' -z 4039980 ']' 00:21:40.590 04:18:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.590 04:18:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:40.590 04:18:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.590 04:18:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:40.590 04:18:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.590 04:18:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:40.849 [2024-05-14 04:18:55.183868] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:40.849 [2024-05-14 04:18:55.183981] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.849 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.849 [2024-05-14 04:18:55.312022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.849 [2024-05-14 04:18:55.407524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:40.849 [2024-05-14 04:18:55.407693] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.849 [2024-05-14 04:18:55.407708] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.849 [2024-05-14 04:18:55.407716] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.849 [2024-05-14 04:18:55.407743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.417 04:18:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:41.417 04:18:55 -- common/autotest_common.sh@852 -- # return 0 00:21:41.417 04:18:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:41.417 04:18:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:41.417 04:18:55 -- common/autotest_common.sh@10 -- # set +x 00:21:41.417 04:18:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.417 04:18:55 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:41.676 [2024-05-14 04:18:56.037813] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:21:41.676 04:18:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:41.676 04:18:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:41.676 04:18:56 -- common/autotest_common.sh@10 -- # set +x 00:21:41.676 ************************************ 00:21:41.676 START TEST lvs_grow_clean 00:21:41.676 ************************************ 00:21:41.676 04:18:56 -- common/autotest_common.sh@1104 -- # lvs_grow 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:41.676 04:18:56 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:41.934 04:18:56 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:41.934 04:18:56 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:41.934 04:18:56 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:41.934 04:18:56 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:41.934 04:18:56 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:41.934 04:18:56 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 lvol 150 00:21:42.192 04:18:56 -- target/nvmf_lvs_grow.sh@33 -- # lvol=094cdbb3-66fd-479b-bf9f-df224c9d1d05 00:21:42.192 04:18:56 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:42.192 04:18:56 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:42.192 [2024-05-14 04:18:56.751997] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:42.192 [2024-05-14 04:18:56.752061] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:42.192 true 00:21:42.192 04:18:56 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:42.192 04:18:56 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:42.451 04:18:56 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:42.451 04:18:56 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:42.451 04:18:57 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 094cdbb3-66fd-479b-bf9f-df224c9d1d05 00:21:42.710 04:18:57 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:42.970 [2024-05-14 04:18:57.300475] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.970 04:18:57 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:42.970 04:18:57 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4040603 00:21:42.970 04:18:57 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.970 04:18:57 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4040603 /var/tmp/bdevperf.sock 00:21:42.970 04:18:57 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:42.970 04:18:57 -- common/autotest_common.sh@819 -- # '[' -z 4040603 ']' 00:21:42.970 04:18:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.970 04:18:57 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:42.970 04:18:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.970 04:18:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:42.970 04:18:57 -- common/autotest_common.sh@10 -- # set +x 00:21:42.970 [2024-05-14 04:18:57.516897] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:42.970 [2024-05-14 04:18:57.517009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040603 ] 00:21:43.231 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.231 [2024-05-14 04:18:57.633233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.231 [2024-05-14 04:18:57.722505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.798 04:18:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:43.798 04:18:58 -- common/autotest_common.sh@852 -- # return 0 00:21:43.798 04:18:58 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:44.056 Nvme0n1 00:21:44.056 04:18:58 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:44.056 [ 00:21:44.056 { 00:21:44.056 "name": "Nvme0n1", 00:21:44.056 "aliases": [ 00:21:44.056 "094cdbb3-66fd-479b-bf9f-df224c9d1d05" 00:21:44.056 ], 00:21:44.056 "product_name": "NVMe disk", 00:21:44.056 "block_size": 4096, 00:21:44.056 "num_blocks": 38912, 00:21:44.056 "uuid": "094cdbb3-66fd-479b-bf9f-df224c9d1d05", 00:21:44.056 "assigned_rate_limits": { 00:21:44.056 "rw_ios_per_sec": 0, 00:21:44.056 "rw_mbytes_per_sec": 0, 00:21:44.056 "r_mbytes_per_sec": 0, 00:21:44.056 "w_mbytes_per_sec": 0 00:21:44.056 }, 00:21:44.056 "claimed": false, 00:21:44.056 "zoned": false, 00:21:44.056 "supported_io_types": { 00:21:44.056 "read": true, 00:21:44.056 "write": true, 00:21:44.056 "unmap": true, 00:21:44.056 "write_zeroes": true, 00:21:44.056 "flush": true, 00:21:44.056 "reset": true, 00:21:44.056 "compare": true, 00:21:44.056 "compare_and_write": true, 00:21:44.056 "abort": true, 00:21:44.056 "nvme_admin": true, 00:21:44.056 "nvme_io": true 00:21:44.056 }, 00:21:44.056 "driver_specific": { 00:21:44.056 "nvme": [ 00:21:44.056 { 00:21:44.056 "trid": { 00:21:44.056 "trtype": "TCP", 00:21:44.056 "adrfam": "IPv4", 00:21:44.056 "traddr": "10.0.0.2", 00:21:44.056 "trsvcid": "4420", 00:21:44.056 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:44.056 }, 00:21:44.056 "ctrlr_data": { 00:21:44.056 "cntlid": 1, 00:21:44.056 "vendor_id": "0x8086", 00:21:44.056 "model_number": "SPDK bdev Controller", 00:21:44.056 "serial_number": "SPDK0", 00:21:44.056 "firmware_revision": "24.01.1", 00:21:44.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:44.056 "oacs": { 00:21:44.056 "security": 0, 00:21:44.056 "format": 0, 00:21:44.056 "firmware": 0, 00:21:44.056 "ns_manage": 0 00:21:44.056 }, 00:21:44.056 "multi_ctrlr": true, 00:21:44.056 "ana_reporting": false 00:21:44.056 }, 00:21:44.056 "vs": { 00:21:44.056 "nvme_version": "1.3" 
00:21:44.056 }, 00:21:44.056 "ns_data": { 00:21:44.056 "id": 1, 00:21:44.056 "can_share": true 00:21:44.056 } 00:21:44.056 } 00:21:44.056 ], 00:21:44.056 "mp_policy": "active_passive" 00:21:44.056 } 00:21:44.056 } 00:21:44.056 ] 00:21:44.056 04:18:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4040769 00:21:44.056 04:18:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:44.056 04:18:58 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.056 Running I/O for 10 seconds... 00:21:45.437 Latency(us) 00:21:45.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:45.437 Nvme0n1 : 1.00 23581.00 92.11 0.00 0.00 0.00 0.00 0.00 00:21:45.437 =================================================================================================================== 00:21:45.437 Total : 23581.00 92.11 0.00 0.00 0.00 0.00 0.00 00:21:45.437 00:21:46.025 04:19:00 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:46.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:46.284 Nvme0n1 : 2.00 23426.50 91.51 0.00 0.00 0.00 0.00 0.00 00:21:46.284 =================================================================================================================== 00:21:46.284 Total : 23426.50 91.51 0.00 0.00 0.00 0.00 0.00 00:21:46.284 00:21:46.284 true 00:21:46.285 04:19:00 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:46.285 04:19:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:46.285 04:19:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:46.285 04:19:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:46.285 04:19:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 4040769 00:21:47.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:47.260 Nvme0n1 : 3.00 23423.00 91.50 0.00 0.00 0.00 0.00 0.00 00:21:47.260 =================================================================================================================== 00:21:47.260 Total : 23423.00 91.50 0.00 0.00 0.00 0.00 0.00 00:21:47.260 00:21:48.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:48.196 Nvme0n1 : 4.00 23521.25 91.88 0.00 0.00 0.00 0.00 0.00 00:21:48.196 =================================================================================================================== 00:21:48.196 Total : 23521.25 91.88 0.00 0.00 0.00 0.00 0.00 00:21:48.196 00:21:49.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:49.138 Nvme0n1 : 5.00 23543.40 91.97 0.00 0.00 0.00 0.00 0.00 00:21:49.138 =================================================================================================================== 00:21:49.138 Total : 23543.40 91.97 0.00 0.00 0.00 0.00 0.00 00:21:49.138 00:21:50.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:50.075 Nvme0n1 : 6.00 23584.83 92.13 0.00 0.00 0.00 0.00 0.00 00:21:50.075 =================================================================================================================== 00:21:50.075 Total : 23584.83 92.13 0.00 0.00 0.00 0.00 0.00 
00:21:50.075 00:21:51.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:51.451 Nvme0n1 : 7.00 23591.57 92.15 0.00 0.00 0.00 0.00 0.00 00:21:51.451 =================================================================================================================== 00:21:51.451 Total : 23591.57 92.15 0.00 0.00 0.00 0.00 0.00 00:21:51.451 00:21:52.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:52.387 Nvme0n1 : 8.00 23610.62 92.23 0.00 0.00 0.00 0.00 0.00 00:21:52.387 =================================================================================================================== 00:21:52.387 Total : 23610.62 92.23 0.00 0.00 0.00 0.00 0.00 00:21:52.387 00:21:53.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:53.320 Nvme0n1 : 9.00 23630.78 92.31 0.00 0.00 0.00 0.00 0.00 00:21:53.320 =================================================================================================================== 00:21:53.320 Total : 23630.78 92.31 0.00 0.00 0.00 0.00 0.00 00:21:53.320 00:21:54.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:54.262 Nvme0n1 : 10.00 23642.90 92.36 0.00 0.00 0.00 0.00 0.00 00:21:54.262 =================================================================================================================== 00:21:54.262 Total : 23642.90 92.36 0.00 0.00 0.00 0.00 0.00 00:21:54.262 00:21:54.262 00:21:54.262 Latency(us) 00:21:54.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:54.262 Nvme0n1 : 10.01 23642.88 92.36 0.00 0.00 5409.99 1905.72 9657.94 00:21:54.262 =================================================================================================================== 00:21:54.262 Total : 23642.88 92.36 0.00 0.00 5409.99 1905.72 9657.94 00:21:54.262 0 00:21:54.262 04:19:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4040603 00:21:54.262 04:19:08 -- common/autotest_common.sh@926 -- # '[' -z 4040603 ']' 00:21:54.262 04:19:08 -- common/autotest_common.sh@930 -- # kill -0 4040603 00:21:54.262 04:19:08 -- common/autotest_common.sh@931 -- # uname 00:21:54.262 04:19:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:54.262 04:19:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4040603 00:21:54.262 04:19:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:54.262 04:19:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:54.262 04:19:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4040603' 00:21:54.262 killing process with pid 4040603 00:21:54.262 04:19:08 -- common/autotest_common.sh@945 -- # kill 4040603 00:21:54.262 Received shutdown signal, test time was about 10.000000 seconds 00:21:54.262 00:21:54.262 Latency(us) 00:21:54.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.262 =================================================================================================================== 00:21:54.262 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.262 04:19:08 -- common/autotest_common.sh@950 -- # wait 4040603 00:21:54.521 04:19:09 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:54.780 04:19:09 -- target/nvmf_lvs_grow.sh@69 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:54.780 04:19:09 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:21:54.780 04:19:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:21:54.780 04:19:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:21:54.780 04:19:09 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:55.040 [2024-05-14 04:19:09.487777] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:55.040 04:19:09 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:55.040 04:19:09 -- common/autotest_common.sh@640 -- # local es=0 00:21:55.040 04:19:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:55.040 04:19:09 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:55.040 04:19:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:55.040 04:19:09 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:55.040 04:19:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:55.040 04:19:09 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:55.040 04:19:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:55.040 04:19:09 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:55.040 04:19:09 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:21:55.040 04:19:09 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:55.299 request: 00:21:55.299 { 00:21:55.299 "uuid": "1b3bfa87-2d4d-497b-be13-4983e11c0b15", 00:21:55.299 "method": "bdev_lvol_get_lvstores", 00:21:55.299 "req_id": 1 00:21:55.299 } 00:21:55.299 Got JSON-RPC error response 00:21:55.299 response: 00:21:55.299 { 00:21:55.299 "code": -19, 00:21:55.299 "message": "No such device" 00:21:55.299 } 00:21:55.299 04:19:09 -- common/autotest_common.sh@643 -- # es=1 00:21:55.299 04:19:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:55.299 04:19:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:55.299 04:19:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:55.299 04:19:09 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:55.299 aio_bdev 00:21:55.299 04:19:09 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 094cdbb3-66fd-479b-bf9f-df224c9d1d05 00:21:55.299 04:19:09 -- common/autotest_common.sh@887 -- # local bdev_name=094cdbb3-66fd-479b-bf9f-df224c9d1d05 00:21:55.299 04:19:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:55.299 04:19:09 -- common/autotest_common.sh@889 -- # local i 00:21:55.299 04:19:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:55.299 04:19:09 -- common/autotest_common.sh@890 
-- # bdev_timeout=2000 00:21:55.299 04:19:09 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:55.559 04:19:09 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 094cdbb3-66fd-479b-bf9f-df224c9d1d05 -t 2000 00:21:55.559 [ 00:21:55.559 { 00:21:55.559 "name": "094cdbb3-66fd-479b-bf9f-df224c9d1d05", 00:21:55.559 "aliases": [ 00:21:55.559 "lvs/lvol" 00:21:55.559 ], 00:21:55.559 "product_name": "Logical Volume", 00:21:55.559 "block_size": 4096, 00:21:55.559 "num_blocks": 38912, 00:21:55.559 "uuid": "094cdbb3-66fd-479b-bf9f-df224c9d1d05", 00:21:55.559 "assigned_rate_limits": { 00:21:55.559 "rw_ios_per_sec": 0, 00:21:55.559 "rw_mbytes_per_sec": 0, 00:21:55.559 "r_mbytes_per_sec": 0, 00:21:55.559 "w_mbytes_per_sec": 0 00:21:55.559 }, 00:21:55.559 "claimed": false, 00:21:55.559 "zoned": false, 00:21:55.559 "supported_io_types": { 00:21:55.559 "read": true, 00:21:55.559 "write": true, 00:21:55.559 "unmap": true, 00:21:55.559 "write_zeroes": true, 00:21:55.559 "flush": false, 00:21:55.559 "reset": true, 00:21:55.559 "compare": false, 00:21:55.559 "compare_and_write": false, 00:21:55.559 "abort": false, 00:21:55.559 "nvme_admin": false, 00:21:55.559 "nvme_io": false 00:21:55.559 }, 00:21:55.559 "driver_specific": { 00:21:55.559 "lvol": { 00:21:55.559 "lvol_store_uuid": "1b3bfa87-2d4d-497b-be13-4983e11c0b15", 00:21:55.559 "base_bdev": "aio_bdev", 00:21:55.559 "thin_provision": false, 00:21:55.559 "snapshot": false, 00:21:55.559 "clone": false, 00:21:55.559 "esnap_clone": false 00:21:55.559 } 00:21:55.559 } 00:21:55.559 } 00:21:55.559 ] 00:21:55.559 04:19:10 -- common/autotest_common.sh@895 -- # return 0 00:21:55.559 04:19:10 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:55.559 04:19:10 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:21:55.820 04:19:10 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:21:55.821 04:19:10 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:21:55.821 04:19:10 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:55.821 04:19:10 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:21:55.821 04:19:10 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 094cdbb3-66fd-479b-bf9f-df224c9d1d05 00:21:56.082 04:19:10 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1b3bfa87-2d4d-497b-be13-4983e11c0b15 00:21:56.082 04:19:10 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:56.341 00:21:56.341 real 0m14.751s 00:21:56.341 user 0m12.289s 00:21:56.341 sys 0m2.138s 00:21:56.341 04:19:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.341 04:19:10 -- common/autotest_common.sh@10 -- # set +x 00:21:56.341 ************************************ 00:21:56.341 END TEST lvs_grow_clean 00:21:56.341 ************************************ 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow 
dirty 00:21:56.341 04:19:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:56.341 04:19:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:56.341 04:19:10 -- common/autotest_common.sh@10 -- # set +x 00:21:56.341 ************************************ 00:21:56.341 START TEST lvs_grow_dirty 00:21:56.341 ************************************ 00:21:56.341 04:19:10 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:56.341 04:19:10 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:56.600 04:19:11 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:56.600 04:19:11 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:56.600 04:19:11 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:21:56.600 04:19:11 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:21:56.600 04:19:11 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:56.859 04:19:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:56.859 04:19:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:56.859 04:19:11 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 lvol 150 00:21:56.859 04:19:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2f929608-8dac-40b6-b8bc-c4012a6692cf 00:21:56.859 04:19:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:56.859 04:19:11 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:57.116 [2024-05-14 04:19:11.546045] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:57.116 [2024-05-14 04:19:11.546108] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:57.116 true 00:21:57.116 04:19:11 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:21:57.116 04:19:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:57.116 04:19:11 -- target/nvmf_lvs_grow.sh@38 -- # (( 
data_clusters == 49 )) 00:21:57.116 04:19:11 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:57.375 04:19:11 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2f929608-8dac-40b6-b8bc-c4012a6692cf 00:21:57.375 04:19:11 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:57.635 04:19:12 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:57.895 04:19:12 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4043412 00:21:57.895 04:19:12 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:57.895 04:19:12 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4043412 /var/tmp/bdevperf.sock 00:21:57.895 04:19:12 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:57.895 04:19:12 -- common/autotest_common.sh@819 -- # '[' -z 4043412 ']' 00:21:57.895 04:19:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.895 04:19:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:57.895 04:19:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.895 04:19:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:57.895 04:19:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.895 [2024-05-14 04:19:12.312018] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:57.895 [2024-05-14 04:19:12.312134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043412 ] 00:21:57.895 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.895 [2024-05-14 04:19:12.429477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.153 [2024-05-14 04:19:12.526142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.717 04:19:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:58.717 04:19:13 -- common/autotest_common.sh@852 -- # return 0 00:21:58.717 04:19:13 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:58.975 Nvme0n1 00:21:58.975 04:19:13 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:58.975 [ 00:21:58.975 { 00:21:58.975 "name": "Nvme0n1", 00:21:58.975 "aliases": [ 00:21:58.975 "2f929608-8dac-40b6-b8bc-c4012a6692cf" 00:21:58.975 ], 00:21:58.975 "product_name": "NVMe disk", 00:21:58.975 "block_size": 4096, 00:21:58.975 "num_blocks": 38912, 00:21:58.975 "uuid": "2f929608-8dac-40b6-b8bc-c4012a6692cf", 00:21:58.975 "assigned_rate_limits": { 00:21:58.975 "rw_ios_per_sec": 0, 00:21:58.975 "rw_mbytes_per_sec": 0, 00:21:58.975 "r_mbytes_per_sec": 0, 00:21:58.975 "w_mbytes_per_sec": 0 00:21:58.975 }, 00:21:58.975 "claimed": false, 00:21:58.975 "zoned": false, 00:21:58.975 "supported_io_types": { 00:21:58.975 "read": true, 00:21:58.975 "write": true, 00:21:58.975 "unmap": true, 00:21:58.975 "write_zeroes": true, 00:21:58.975 "flush": true, 00:21:58.975 "reset": true, 00:21:58.975 "compare": true, 00:21:58.975 "compare_and_write": true, 00:21:58.975 "abort": true, 00:21:58.975 "nvme_admin": true, 00:21:58.975 "nvme_io": true 00:21:58.975 }, 00:21:58.975 "driver_specific": { 00:21:58.975 "nvme": [ 00:21:58.975 { 00:21:58.975 "trid": { 00:21:58.975 "trtype": "TCP", 00:21:58.975 "adrfam": "IPv4", 00:21:58.975 "traddr": "10.0.0.2", 00:21:58.975 "trsvcid": "4420", 00:21:58.975 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:58.975 }, 00:21:58.975 "ctrlr_data": { 00:21:58.975 "cntlid": 1, 00:21:58.975 "vendor_id": "0x8086", 00:21:58.975 "model_number": "SPDK bdev Controller", 00:21:58.975 "serial_number": "SPDK0", 00:21:58.975 "firmware_revision": "24.01.1", 00:21:58.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:58.975 "oacs": { 00:21:58.975 "security": 0, 00:21:58.975 "format": 0, 00:21:58.975 "firmware": 0, 00:21:58.975 "ns_manage": 0 00:21:58.975 }, 00:21:58.975 "multi_ctrlr": true, 00:21:58.975 "ana_reporting": false 00:21:58.975 }, 00:21:58.975 "vs": { 00:21:58.975 "nvme_version": "1.3" 00:21:58.975 }, 00:21:58.975 "ns_data": { 00:21:58.975 "id": 1, 00:21:58.975 "can_share": true 00:21:58.975 } 00:21:58.975 } 00:21:58.975 ], 00:21:58.975 "mp_policy": "active_passive" 00:21:58.975 } 00:21:58.975 } 00:21:58.975 ] 00:21:58.975 04:19:13 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4043706 00:21:58.975 04:19:13 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:58.975 04:19:13 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:59.234 Running I/O for 10 
seconds... 00:22:00.172 Latency(us) 00:22:00.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:00.172 Nvme0n1 : 1.00 24058.00 93.98 0.00 0.00 0.00 0.00 0.00 00:22:00.172 =================================================================================================================== 00:22:00.172 Total : 24058.00 93.98 0.00 0.00 0.00 0.00 0.00 00:22:00.172 00:22:01.104 04:19:15 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:01.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:01.104 Nvme0n1 : 2.00 24220.50 94.61 0.00 0.00 0.00 0.00 0.00 00:22:01.104 =================================================================================================================== 00:22:01.104 Total : 24220.50 94.61 0.00 0.00 0.00 0.00 0.00 00:22:01.104 00:22:01.104 true 00:22:01.105 04:19:15 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:22:01.105 04:19:15 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:01.364 04:19:15 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:22:01.364 04:19:15 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:22:01.364 04:19:15 -- target/nvmf_lvs_grow.sh@65 -- # wait 4043706 00:22:02.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:02.301 Nvme0n1 : 3.00 24232.67 94.66 0.00 0.00 0.00 0.00 0.00 00:22:02.301 =================================================================================================================== 00:22:02.301 Total : 24232.67 94.66 0.00 0.00 0.00 0.00 0.00 00:22:02.301 00:22:03.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:03.238 Nvme0n1 : 4.00 24267.50 94.79 0.00 0.00 0.00 0.00 0.00 00:22:03.238 =================================================================================================================== 00:22:03.238 Total : 24267.50 94.79 0.00 0.00 0.00 0.00 0.00 00:22:03.238 00:22:04.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:04.176 Nvme0n1 : 5.00 24290.60 94.89 0.00 0.00 0.00 0.00 0.00 00:22:04.176 =================================================================================================================== 00:22:04.176 Total : 24290.60 94.89 0.00 0.00 0.00 0.00 0.00 00:22:04.176 00:22:05.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:05.114 Nvme0n1 : 6.00 24316.83 94.99 0.00 0.00 0.00 0.00 0.00 00:22:05.114 =================================================================================================================== 00:22:05.114 Total : 24316.83 94.99 0.00 0.00 0.00 0.00 0.00 00:22:05.114 00:22:06.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:06.090 Nvme0n1 : 7.00 24212.71 94.58 0.00 0.00 0.00 0.00 0.00 00:22:06.090 =================================================================================================================== 00:22:06.090 Total : 24212.71 94.58 0.00 0.00 0.00 0.00 0.00 00:22:06.090 00:22:07.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:07.030 Nvme0n1 : 8.00 24232.12 94.66 0.00 0.00 0.00 0.00 0.00 00:22:07.030 
=================================================================================================================== 00:22:07.030 Total : 24232.12 94.66 0.00 0.00 0.00 0.00 0.00 00:22:07.030 00:22:08.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:08.407 Nvme0n1 : 9.00 24246.89 94.71 0.00 0.00 0.00 0.00 0.00 00:22:08.408 =================================================================================================================== 00:22:08.408 Total : 24246.89 94.71 0.00 0.00 0.00 0.00 0.00 00:22:08.408 00:22:09.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:09.345 Nvme0n1 : 10.00 24264.50 94.78 0.00 0.00 0.00 0.00 0.00 00:22:09.345 =================================================================================================================== 00:22:09.345 Total : 24264.50 94.78 0.00 0.00 0.00 0.00 0.00 00:22:09.345 00:22:09.345 00:22:09.345 Latency(us) 00:22:09.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:09.345 Nvme0n1 : 10.00 24261.59 94.77 0.00 0.00 5272.06 2983.61 12003.44 00:22:09.345 =================================================================================================================== 00:22:09.345 Total : 24261.59 94.77 0.00 0.00 5272.06 2983.61 12003.44 00:22:09.345 0 00:22:09.345 04:19:23 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4043412 00:22:09.345 04:19:23 -- common/autotest_common.sh@926 -- # '[' -z 4043412 ']' 00:22:09.345 04:19:23 -- common/autotest_common.sh@930 -- # kill -0 4043412 00:22:09.345 04:19:23 -- common/autotest_common.sh@931 -- # uname 00:22:09.345 04:19:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:09.345 04:19:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4043412 00:22:09.345 04:19:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:09.345 04:19:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:09.345 04:19:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4043412' 00:22:09.345 killing process with pid 4043412 00:22:09.345 04:19:23 -- common/autotest_common.sh@945 -- # kill 4043412 00:22:09.345 Received shutdown signal, test time was about 10.000000 seconds 00:22:09.345 00:22:09.345 Latency(us) 00:22:09.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.345 =================================================================================================================== 00:22:09.345 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:09.345 04:19:23 -- common/autotest_common.sh@950 -- # wait 4043412 00:22:09.602 04:19:24 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:09.602 04:19:24 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:09.602 04:19:24 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:22:09.860 04:19:24 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:22:09.860 04:19:24 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:22:09.860 04:19:24 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 4039980 00:22:09.860 04:19:24 -- target/nvmf_lvs_grow.sh@74 -- # wait 4039980 00:22:09.860 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 4039980 Killed "${NVMF_APP[@]}" "$@" 00:22:09.860 04:19:24 -- target/nvmf_lvs_grow.sh@74 -- # true 00:22:09.860 04:19:24 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:22:09.860 04:19:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:09.860 04:19:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:09.860 04:19:24 -- common/autotest_common.sh@10 -- # set +x 00:22:09.860 04:19:24 -- nvmf/common.sh@469 -- # nvmfpid=4045824 00:22:09.860 04:19:24 -- nvmf/common.sh@470 -- # waitforlisten 4045824 00:22:09.860 04:19:24 -- common/autotest_common.sh@819 -- # '[' -z 4045824 ']' 00:22:09.860 04:19:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.860 04:19:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:09.860 04:19:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.860 04:19:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:09.860 04:19:24 -- common/autotest_common.sh@10 -- # set +x 00:22:09.860 04:19:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:09.860 [2024-05-14 04:19:24.403080] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:09.860 [2024-05-14 04:19:24.403180] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.118 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.118 [2024-05-14 04:19:24.526985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.118 [2024-05-14 04:19:24.617027] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:10.118 [2024-05-14 04:19:24.617189] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.118 [2024-05-14 04:19:24.617205] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.118 [2024-05-14 04:19:24.617213] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.118 [2024-05-14 04:19:24.617236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.686 04:19:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:10.686 04:19:25 -- common/autotest_common.sh@852 -- # return 0 00:22:10.686 04:19:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:10.686 04:19:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:10.686 04:19:25 -- common/autotest_common.sh@10 -- # set +x 00:22:10.686 04:19:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.686 04:19:25 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:10.686 [2024-05-14 04:19:25.238635] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:10.686 [2024-05-14 04:19:25.238756] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:10.686 [2024-05-14 04:19:25.238786] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:10.686 04:19:25 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:22:10.686 04:19:25 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 2f929608-8dac-40b6-b8bc-c4012a6692cf 00:22:10.686 04:19:25 -- common/autotest_common.sh@887 -- # local bdev_name=2f929608-8dac-40b6-b8bc-c4012a6692cf 00:22:10.686 04:19:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:10.686 04:19:25 -- common/autotest_common.sh@889 -- # local i 00:22:10.686 04:19:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:10.686 04:19:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:10.686 04:19:25 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:10.947 04:19:25 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2f929608-8dac-40b6-b8bc-c4012a6692cf -t 2000 00:22:10.947 [ 00:22:10.947 { 00:22:10.947 "name": "2f929608-8dac-40b6-b8bc-c4012a6692cf", 00:22:10.947 "aliases": [ 00:22:10.947 "lvs/lvol" 00:22:10.947 ], 00:22:10.947 "product_name": "Logical Volume", 00:22:10.947 "block_size": 4096, 00:22:10.947 "num_blocks": 38912, 00:22:10.947 "uuid": "2f929608-8dac-40b6-b8bc-c4012a6692cf", 00:22:10.947 "assigned_rate_limits": { 00:22:10.947 "rw_ios_per_sec": 0, 00:22:10.947 "rw_mbytes_per_sec": 0, 00:22:10.947 "r_mbytes_per_sec": 0, 00:22:10.947 "w_mbytes_per_sec": 0 00:22:10.947 }, 00:22:10.947 "claimed": false, 00:22:10.947 "zoned": false, 00:22:10.947 "supported_io_types": { 00:22:10.947 "read": true, 00:22:10.947 "write": true, 00:22:10.947 "unmap": true, 00:22:10.947 "write_zeroes": true, 00:22:10.947 "flush": false, 00:22:10.947 "reset": true, 00:22:10.947 "compare": false, 00:22:10.947 "compare_and_write": false, 00:22:10.947 "abort": false, 00:22:10.947 "nvme_admin": false, 00:22:10.947 "nvme_io": false 00:22:10.947 }, 00:22:10.947 "driver_specific": { 00:22:10.947 "lvol": { 00:22:10.947 "lvol_store_uuid": "ec12e1b4-a991-4fb0-bb6d-87a98f3a0783", 00:22:10.947 "base_bdev": "aio_bdev", 00:22:10.947 "thin_provision": false, 00:22:10.947 "snapshot": false, 00:22:10.947 "clone": false, 00:22:10.947 "esnap_clone": false 00:22:10.947 } 00:22:10.947 } 00:22:10.947 } 00:22:10.947 ] 00:22:10.947 04:19:25 -- common/autotest_common.sh@895 -- # return 0 00:22:10.947 04:19:25 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:10.947 04:19:25 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:22:11.206 04:19:25 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:22:11.206 04:19:25 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:11.206 04:19:25 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:22:11.464 04:19:25 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:22:11.464 04:19:25 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:11.464 [2024-05-14 04:19:25.916837] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:22:11.464 04:19:25 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:11.464 04:19:25 -- common/autotest_common.sh@640 -- # local es=0 00:22:11.464 04:19:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:11.464 04:19:25 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:11.464 04:19:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:11.464 04:19:25 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:11.464 04:19:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:11.465 04:19:25 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:11.465 04:19:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:11.465 04:19:25 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:11.465 04:19:25 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:22:11.465 04:19:25 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:11.723 request: 00:22:11.723 { 00:22:11.723 "uuid": "ec12e1b4-a991-4fb0-bb6d-87a98f3a0783", 00:22:11.723 "method": "bdev_lvol_get_lvstores", 00:22:11.723 "req_id": 1 00:22:11.723 } 00:22:11.723 Got JSON-RPC error response 00:22:11.723 response: 00:22:11.723 { 00:22:11.723 "code": -19, 00:22:11.723 "message": "No such device" 00:22:11.723 } 00:22:11.723 04:19:26 -- common/autotest_common.sh@643 -- # es=1 00:22:11.723 04:19:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:11.723 04:19:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:11.723 04:19:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:11.723 04:19:26 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:11.723 aio_bdev 00:22:11.723 04:19:26 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 2f929608-8dac-40b6-b8bc-c4012a6692cf 00:22:11.723 04:19:26 -- common/autotest_common.sh@887 -- # local 
bdev_name=2f929608-8dac-40b6-b8bc-c4012a6692cf 00:22:11.723 04:19:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:11.723 04:19:26 -- common/autotest_common.sh@889 -- # local i 00:22:11.723 04:19:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:11.724 04:19:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:11.724 04:19:26 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:11.982 04:19:26 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2f929608-8dac-40b6-b8bc-c4012a6692cf -t 2000 00:22:11.982 [ 00:22:11.982 { 00:22:11.982 "name": "2f929608-8dac-40b6-b8bc-c4012a6692cf", 00:22:11.982 "aliases": [ 00:22:11.982 "lvs/lvol" 00:22:11.982 ], 00:22:11.982 "product_name": "Logical Volume", 00:22:11.982 "block_size": 4096, 00:22:11.982 "num_blocks": 38912, 00:22:11.982 "uuid": "2f929608-8dac-40b6-b8bc-c4012a6692cf", 00:22:11.982 "assigned_rate_limits": { 00:22:11.982 "rw_ios_per_sec": 0, 00:22:11.982 "rw_mbytes_per_sec": 0, 00:22:11.982 "r_mbytes_per_sec": 0, 00:22:11.982 "w_mbytes_per_sec": 0 00:22:11.982 }, 00:22:11.982 "claimed": false, 00:22:11.982 "zoned": false, 00:22:11.982 "supported_io_types": { 00:22:11.982 "read": true, 00:22:11.982 "write": true, 00:22:11.982 "unmap": true, 00:22:11.982 "write_zeroes": true, 00:22:11.982 "flush": false, 00:22:11.982 "reset": true, 00:22:11.982 "compare": false, 00:22:11.982 "compare_and_write": false, 00:22:11.982 "abort": false, 00:22:11.982 "nvme_admin": false, 00:22:11.982 "nvme_io": false 00:22:11.982 }, 00:22:11.982 "driver_specific": { 00:22:11.982 "lvol": { 00:22:11.982 "lvol_store_uuid": "ec12e1b4-a991-4fb0-bb6d-87a98f3a0783", 00:22:11.982 "base_bdev": "aio_bdev", 00:22:11.982 "thin_provision": false, 00:22:11.982 "snapshot": false, 00:22:11.982 "clone": false, 00:22:11.982 "esnap_clone": false 00:22:11.982 } 00:22:11.982 } 00:22:11.982 } 00:22:11.982 ] 00:22:11.982 04:19:26 -- common/autotest_common.sh@895 -- # return 0 00:22:11.982 04:19:26 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:11.982 04:19:26 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:22:12.242 04:19:26 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:22:12.242 04:19:26 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:12.242 04:19:26 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:22:12.242 04:19:26 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:22:12.242 04:19:26 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2f929608-8dac-40b6-b8bc-c4012a6692cf 00:22:12.502 04:19:26 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec12e1b4-a991-4fb0-bb6d-87a98f3a0783 00:22:12.502 04:19:27 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:12.764 04:19:27 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:22:12.764 00:22:12.764 real 0m16.366s 00:22:12.764 user 0m42.434s 00:22:12.764 sys 0m3.105s 00:22:12.764 04:19:27 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:22:12.764 04:19:27 -- common/autotest_common.sh@10 -- # set +x 00:22:12.764 ************************************ 00:22:12.764 END TEST lvs_grow_dirty 00:22:12.764 ************************************ 00:22:12.764 04:19:27 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:22:12.764 04:19:27 -- common/autotest_common.sh@796 -- # type=--id 00:22:12.764 04:19:27 -- common/autotest_common.sh@797 -- # id=0 00:22:12.764 04:19:27 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:12.764 04:19:27 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:12.764 04:19:27 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:12.764 04:19:27 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:12.764 04:19:27 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:12.764 04:19:27 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:12.764 nvmf_trace.0 00:22:12.764 04:19:27 -- common/autotest_common.sh@811 -- # return 0 00:22:12.764 04:19:27 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:22:12.764 04:19:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:12.764 04:19:27 -- nvmf/common.sh@116 -- # sync 00:22:12.764 04:19:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:12.764 04:19:27 -- nvmf/common.sh@119 -- # set +e 00:22:12.764 04:19:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:12.764 04:19:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:12.764 rmmod nvme_tcp 00:22:12.764 rmmod nvme_fabrics 00:22:12.764 rmmod nvme_keyring 00:22:12.764 04:19:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:12.764 04:19:27 -- nvmf/common.sh@123 -- # set -e 00:22:12.764 04:19:27 -- nvmf/common.sh@124 -- # return 0 00:22:12.764 04:19:27 -- nvmf/common.sh@477 -- # '[' -n 4045824 ']' 00:22:12.764 04:19:27 -- nvmf/common.sh@478 -- # killprocess 4045824 00:22:12.764 04:19:27 -- common/autotest_common.sh@926 -- # '[' -z 4045824 ']' 00:22:13.025 04:19:27 -- common/autotest_common.sh@930 -- # kill -0 4045824 00:22:13.025 04:19:27 -- common/autotest_common.sh@931 -- # uname 00:22:13.025 04:19:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.025 04:19:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4045824 00:22:13.025 04:19:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:13.025 04:19:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:13.025 04:19:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4045824' 00:22:13.025 killing process with pid 4045824 00:22:13.025 04:19:27 -- common/autotest_common.sh@945 -- # kill 4045824 00:22:13.025 04:19:27 -- common/autotest_common.sh@950 -- # wait 4045824 00:22:13.283 04:19:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:13.283 04:19:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:13.283 04:19:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:13.283 04:19:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.283 04:19:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:13.283 04:19:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.283 04:19:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.283 04:19:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.817 04:19:29 -- nvmf/common.sh@278 -- # ip 
-4 addr flush cvl_0_1 00:22:15.817 00:22:15.817 real 0m40.483s 00:22:15.817 user 0m59.918s 00:22:15.817 sys 0m9.861s 00:22:15.817 04:19:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.817 04:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:15.817 ************************************ 00:22:15.817 END TEST nvmf_lvs_grow 00:22:15.817 ************************************ 00:22:15.817 04:19:29 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:22:15.817 04:19:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:15.817 04:19:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:15.817 04:19:29 -- common/autotest_common.sh@10 -- # set +x 00:22:15.817 ************************************ 00:22:15.817 START TEST nvmf_bdev_io_wait 00:22:15.817 ************************************ 00:22:15.817 04:19:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:22:15.817 * Looking for test storage... 00:22:15.817 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:15.817 04:19:29 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.817 04:19:29 -- nvmf/common.sh@7 -- # uname -s 00:22:15.817 04:19:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.817 04:19:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.817 04:19:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.817 04:19:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.817 04:19:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.817 04:19:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.817 04:19:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.817 04:19:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.817 04:19:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.817 04:19:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.817 04:19:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:22:15.817 04:19:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:22:15.817 04:19:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.817 04:19:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.817 04:19:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:15.817 04:19:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:15.817 04:19:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.817 04:19:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.817 04:19:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.817 04:19:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.817 
04:19:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.818 04:19:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.818 04:19:30 -- paths/export.sh@5 -- # export PATH 00:22:15.818 04:19:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.818 04:19:30 -- nvmf/common.sh@46 -- # : 0 00:22:15.818 04:19:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:15.818 04:19:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:15.818 04:19:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:15.818 04:19:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.818 04:19:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.818 04:19:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:15.818 04:19:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:15.818 04:19:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:15.818 04:19:30 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:15.818 04:19:30 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:15.818 04:19:30 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:22:15.818 04:19:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:15.818 04:19:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.818 04:19:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:15.818 04:19:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:15.818 04:19:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:15.818 04:19:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.818 04:19:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.818 04:19:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.818 04:19:30 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:15.818 04:19:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:15.818 04:19:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:15.818 04:19:30 -- common/autotest_common.sh@10 -- # set +x 00:22:21.095 04:19:34 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:21.095 04:19:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:21.095 04:19:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:21.095 04:19:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:21.095 04:19:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:21.095 04:19:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:21.095 04:19:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:21.095 04:19:34 -- nvmf/common.sh@294 -- # net_devs=() 00:22:21.095 04:19:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:21.095 04:19:34 -- nvmf/common.sh@295 -- # e810=() 00:22:21.095 04:19:34 -- nvmf/common.sh@295 -- # local -ga e810 00:22:21.095 04:19:34 -- nvmf/common.sh@296 -- # x722=() 00:22:21.095 04:19:34 -- nvmf/common.sh@296 -- # local -ga x722 00:22:21.095 04:19:34 -- nvmf/common.sh@297 -- # mlx=() 00:22:21.095 04:19:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:21.095 04:19:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.095 04:19:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:21.095 04:19:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:21.095 04:19:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:21.095 04:19:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:21.095 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:21.095 04:19:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:21.095 04:19:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:21.095 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:21.095 04:19:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:21.095 
04:19:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:21.095 04:19:34 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:21.095 04:19:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:21.095 04:19:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.095 04:19:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:21.095 04:19:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.095 04:19:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:21.095 Found net devices under 0000:27:00.0: cvl_0_0 00:22:21.095 04:19:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.095 04:19:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:21.095 04:19:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.095 04:19:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:21.095 04:19:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.095 04:19:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:21.096 Found net devices under 0000:27:00.1: cvl_0_1 00:22:21.096 04:19:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.096 04:19:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:21.096 04:19:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:21.096 04:19:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:21.096 04:19:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:21.096 04:19:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:21.096 04:19:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.096 04:19:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.096 04:19:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.096 04:19:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:21.096 04:19:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.096 04:19:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.096 04:19:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:21.096 04:19:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.096 04:19:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.096 04:19:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:21.096 04:19:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:21.096 04:19:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.096 04:19:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.096 04:19:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.096 04:19:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.096 04:19:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:21.096 04:19:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.096 04:19:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.096 04:19:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.096 04:19:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:21.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:21.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:22:21.096 00:22:21.096 --- 10.0.0.2 ping statistics --- 00:22:21.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.096 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:22:21.096 04:19:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:22:21.096 00:22:21.096 --- 10.0.0.1 ping statistics --- 00:22:21.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.096 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:22:21.096 04:19:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.096 04:19:35 -- nvmf/common.sh@410 -- # return 0 00:22:21.096 04:19:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:21.096 04:19:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.096 04:19:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:21.096 04:19:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:21.096 04:19:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.096 04:19:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:21.096 04:19:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:21.096 04:19:35 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:21.096 04:19:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:21.096 04:19:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:21.096 04:19:35 -- common/autotest_common.sh@10 -- # set +x 00:22:21.096 04:19:35 -- nvmf/common.sh@469 -- # nvmfpid=4050387 00:22:21.096 04:19:35 -- nvmf/common.sh@470 -- # waitforlisten 4050387 00:22:21.096 04:19:35 -- common/autotest_common.sh@819 -- # '[' -z 4050387 ']' 00:22:21.096 04:19:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.096 04:19:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:21.096 04:19:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:21.096 04:19:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.096 04:19:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:21.096 04:19:35 -- common/autotest_common.sh@10 -- # set +x 00:22:21.096 [2024-05-14 04:19:35.087555] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:21.096 [2024-05-14 04:19:35.087627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.096 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.096 [2024-05-14 04:19:35.179201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.096 [2024-05-14 04:19:35.274085] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:21.096 [2024-05-14 04:19:35.274256] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:21.096 [2024-05-14 04:19:35.274269] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.096 [2024-05-14 04:19:35.274277] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.096 [2024-05-14 04:19:35.274425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.096 [2024-05-14 04:19:35.274448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.096 [2024-05-14 04:19:35.274551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.096 [2024-05-14 04:19:35.274562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.355 04:19:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:21.355 04:19:35 -- common/autotest_common.sh@852 -- # return 0 00:22:21.355 04:19:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:21.355 04:19:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:21.355 04:19:35 -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 04:19:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.355 04:19:35 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:22:21.355 04:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.355 04:19:35 -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 04:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.355 04:19:35 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:22:21.355 04:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.355 04:19:35 -- common/autotest_common.sh@10 -- # set +x 00:22:21.615 04:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.615 04:19:35 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:21.615 04:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.615 04:19:35 -- common/autotest_common.sh@10 -- # set +x 00:22:21.615 [2024-05-14 04:19:35.950038] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.615 04:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.615 04:19:35 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:21.615 04:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.615 04:19:35 -- common/autotest_common.sh@10 -- # set +x 00:22:21.615 Malloc0 00:22:21.615 04:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:21.615 04:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.615 04:19:36 -- common/autotest_common.sh@10 -- # set +x 00:22:21.615 04:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.615 04:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.615 04:19:36 -- common/autotest_common.sh@10 -- # set +x 00:22:21.615 04:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.615 04:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.615 04:19:36 -- common/autotest_common.sh@10 -- # set +x 
00:22:21.615 [2024-05-14 04:19:36.034335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.615 04:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4050450 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@30 -- # READ_PID=4050452 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4050455 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4050457 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@35 -- # sync 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:22:21.615 04:19:36 -- nvmf/common.sh@520 -- # config=() 00:22:21.615 04:19:36 -- nvmf/common.sh@520 -- # local subsystem config 00:22:21.615 04:19:36 -- nvmf/common.sh@520 -- # config=() 00:22:21.615 04:19:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:21.615 04:19:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:21.615 { 00:22:21.615 "params": { 00:22:21.615 "name": "Nvme$subsystem", 00:22:21.615 "trtype": "$TEST_TRANSPORT", 00:22:21.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.615 "adrfam": "ipv4", 00:22:21.615 "trsvcid": "$NVMF_PORT", 00:22:21.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.615 "hdgst": ${hdgst:-false}, 00:22:21.615 "ddgst": ${ddgst:-false} 00:22:21.615 }, 00:22:21.615 "method": "bdev_nvme_attach_controller" 00:22:21.615 } 00:22:21.615 EOF 00:22:21.615 )") 00:22:21.615 04:19:36 -- nvmf/common.sh@520 -- # local subsystem config 00:22:21.615 04:19:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:21.615 04:19:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:21.615 { 00:22:21.615 "params": { 00:22:21.615 "name": "Nvme$subsystem", 00:22:21.615 "trtype": "$TEST_TRANSPORT", 00:22:21.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.615 "adrfam": "ipv4", 00:22:21.615 "trsvcid": "$NVMF_PORT", 00:22:21.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.615 "hdgst": ${hdgst:-false}, 00:22:21.615 "ddgst": ${ddgst:-false} 00:22:21.615 }, 00:22:21.615 "method": "bdev_nvme_attach_controller" 00:22:21.615 } 00:22:21.615 EOF 00:22:21.615 )") 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:22:21.615 04:19:36 -- nvmf/common.sh@520 -- # config=() 00:22:21.615 04:19:36 -- nvmf/common.sh@520 -- # local subsystem config 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:22:21.615 04:19:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:21.615 04:19:36 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:21.615 { 00:22:21.615 "params": { 00:22:21.615 "name": "Nvme$subsystem", 00:22:21.615 "trtype": "$TEST_TRANSPORT", 00:22:21.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.615 "adrfam": "ipv4", 00:22:21.615 "trsvcid": "$NVMF_PORT", 00:22:21.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.615 "hdgst": ${hdgst:-false}, 00:22:21.615 "ddgst": ${ddgst:-false} 00:22:21.615 }, 00:22:21.615 "method": "bdev_nvme_attach_controller" 00:22:21.615 } 00:22:21.615 EOF 00:22:21.615 )") 00:22:21.615 04:19:36 -- nvmf/common.sh@520 -- # config=() 00:22:21.615 04:19:36 -- nvmf/common.sh@520 -- # local subsystem config 00:22:21.615 04:19:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:21.615 04:19:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:21.615 { 00:22:21.615 "params": { 00:22:21.615 "name": "Nvme$subsystem", 00:22:21.615 "trtype": "$TEST_TRANSPORT", 00:22:21.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.615 "adrfam": "ipv4", 00:22:21.615 "trsvcid": "$NVMF_PORT", 00:22:21.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.615 "hdgst": ${hdgst:-false}, 00:22:21.615 "ddgst": ${ddgst:-false} 00:22:21.615 }, 00:22:21.615 "method": "bdev_nvme_attach_controller" 00:22:21.615 } 00:22:21.615 EOF 00:22:21.615 )") 00:22:21.615 04:19:36 -- target/bdev_io_wait.sh@37 -- # wait 4050450 00:22:21.615 04:19:36 -- nvmf/common.sh@542 -- # cat 00:22:21.615 04:19:36 -- nvmf/common.sh@542 -- # cat 00:22:21.616 04:19:36 -- nvmf/common.sh@542 -- # cat 00:22:21.616 04:19:36 -- nvmf/common.sh@542 -- # cat 00:22:21.616 04:19:36 -- nvmf/common.sh@544 -- # jq . 00:22:21.616 04:19:36 -- nvmf/common.sh@544 -- # jq . 00:22:21.616 04:19:36 -- nvmf/common.sh@544 -- # jq . 00:22:21.616 04:19:36 -- nvmf/common.sh@544 -- # jq . 
00:22:21.616 04:19:36 -- nvmf/common.sh@545 -- # IFS=, 00:22:21.616 04:19:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:21.616 "params": { 00:22:21.616 "name": "Nvme1", 00:22:21.616 "trtype": "tcp", 00:22:21.616 "traddr": "10.0.0.2", 00:22:21.616 "adrfam": "ipv4", 00:22:21.616 "trsvcid": "4420", 00:22:21.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.616 "hdgst": false, 00:22:21.616 "ddgst": false 00:22:21.616 }, 00:22:21.616 "method": "bdev_nvme_attach_controller" 00:22:21.616 }' 00:22:21.616 04:19:36 -- nvmf/common.sh@545 -- # IFS=, 00:22:21.616 04:19:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:21.616 "params": { 00:22:21.616 "name": "Nvme1", 00:22:21.616 "trtype": "tcp", 00:22:21.616 "traddr": "10.0.0.2", 00:22:21.616 "adrfam": "ipv4", 00:22:21.616 "trsvcid": "4420", 00:22:21.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.616 "hdgst": false, 00:22:21.616 "ddgst": false 00:22:21.616 }, 00:22:21.616 "method": "bdev_nvme_attach_controller" 00:22:21.616 }' 00:22:21.616 04:19:36 -- nvmf/common.sh@545 -- # IFS=, 00:22:21.616 04:19:36 -- nvmf/common.sh@545 -- # IFS=, 00:22:21.616 04:19:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:21.616 "params": { 00:22:21.616 "name": "Nvme1", 00:22:21.616 "trtype": "tcp", 00:22:21.616 "traddr": "10.0.0.2", 00:22:21.616 "adrfam": "ipv4", 00:22:21.616 "trsvcid": "4420", 00:22:21.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.616 "hdgst": false, 00:22:21.616 "ddgst": false 00:22:21.616 }, 00:22:21.616 "method": "bdev_nvme_attach_controller" 00:22:21.616 }' 00:22:21.616 04:19:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:21.616 "params": { 00:22:21.616 "name": "Nvme1", 00:22:21.616 "trtype": "tcp", 00:22:21.616 "traddr": "10.0.0.2", 00:22:21.616 "adrfam": "ipv4", 00:22:21.616 "trsvcid": "4420", 00:22:21.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.616 "hdgst": false, 00:22:21.616 "ddgst": false 00:22:21.616 }, 00:22:21.616 "method": "bdev_nvme_attach_controller" 00:22:21.616 }' 00:22:21.616 [2024-05-14 04:19:36.093462] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:21.616 [2024-05-14 04:19:36.093555] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:22:21.616 [2024-05-14 04:19:36.095817] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:21.616 [2024-05-14 04:19:36.095904] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:22:21.616 [2024-05-14 04:19:36.110055] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:21.616 [2024-05-14 04:19:36.110146] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:21.616 [2024-05-14 04:19:36.110175] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:21.616 [2024-05-14 04:19:36.110263] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:22:21.616 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.876 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.876 [2024-05-14 04:19:36.278750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.876 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.876 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.876 [2024-05-14 04:19:36.379366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.876 [2024-05-14 04:19:36.406110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:21.876 [2024-05-14 04:19:36.424565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.137 [2024-05-14 04:19:36.515954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:22.137 [2024-05-14 04:19:36.527761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.137 [2024-05-14 04:19:36.561145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:22.137 [2024-05-14 04:19:36.658844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:22.397 Running I/O for 1 seconds... 00:22:22.397 Running I/O for 1 seconds... 00:22:22.656 Running I/O for 1 seconds... 00:22:22.656 Running I/O for 1 seconds... 
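Each of the four bdevperf jobs announced above runs for one second against the same nqn.2016-06.io.spdk:cnode1 subsystem, one workload per instance (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), with the NVMe-oF attach config handed in on fd 63. A minimal sketch of one launch, reusing only the flags and the gen_nvmf_target_json helper visible in this trace (the process-substitution form is assumed; the log only shows the resulting /dev/fd/63 path):

    # 128-deep, 4 KiB, 1 s write workload against Nvme1 (tcp, 10.0.0.2:4420 per the generated JSON)
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    # ... the read/flush/unmap instances are launched the same way, then:
    wait "$WRITE_PID"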
00:22:23.222 00:22:23.222 Latency(us) 00:22:23.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.222 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:22:23.222 Nvme1n1 : 1.02 9541.01 37.27 0.00 0.00 13229.72 5484.33 27318.16 00:22:23.222 =================================================================================================================== 00:22:23.223 Total : 9541.01 37.27 0.00 0.00 13229.72 5484.33 27318.16 00:22:23.482 00:22:23.482 Latency(us) 00:22:23.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.482 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:22:23.482 Nvme1n1 : 1.01 7532.72 29.42 0.00 0.00 16885.16 10209.82 30215.55 00:22:23.482 =================================================================================================================== 00:22:23.482 Total : 7532.72 29.42 0.00 0.00 16885.16 10209.82 30215.55 00:22:23.482 00:22:23.482 Latency(us) 00:22:23.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.482 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:22:23.482 Nvme1n1 : 1.00 164498.11 642.57 0.00 0.00 774.99 217.73 1138.26 00:22:23.482 =================================================================================================================== 00:22:23.482 Total : 164498.11 642.57 0.00 0.00 774.99 217.73 1138.26 00:22:23.482 00:22:23.482 Latency(us) 00:22:23.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.482 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:22:23.482 Nvme1n1 : 1.00 8921.38 34.85 0.00 0.00 14314.35 3345.79 40563.33 00:22:23.483 =================================================================================================================== 00:22:23.483 Total : 8921.38 34.85 0.00 0.00 14314.35 3345.79 40563.33 00:22:24.054 04:19:38 -- target/bdev_io_wait.sh@38 -- # wait 4050452 00:22:24.054 04:19:38 -- target/bdev_io_wait.sh@39 -- # wait 4050455 00:22:24.054 04:19:38 -- target/bdev_io_wait.sh@40 -- # wait 4050457 00:22:24.054 04:19:38 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:24.054 04:19:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.054 04:19:38 -- common/autotest_common.sh@10 -- # set +x 00:22:24.054 04:19:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.054 04:19:38 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:22:24.054 04:19:38 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:22:24.054 04:19:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:24.054 04:19:38 -- nvmf/common.sh@116 -- # sync 00:22:24.054 04:19:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:24.054 04:19:38 -- nvmf/common.sh@119 -- # set +e 00:22:24.054 04:19:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:24.054 04:19:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:24.054 rmmod nvme_tcp 00:22:24.054 rmmod nvme_fabrics 00:22:24.054 rmmod nvme_keyring 00:22:24.054 04:19:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:24.054 04:19:38 -- nvmf/common.sh@123 -- # set -e 00:22:24.054 04:19:38 -- nvmf/common.sh@124 -- # return 0 00:22:24.054 04:19:38 -- nvmf/common.sh@477 -- # '[' -n 4050387 ']' 00:22:24.054 04:19:38 -- nvmf/common.sh@478 -- # killprocess 4050387 00:22:24.054 04:19:38 -- common/autotest_common.sh@926 -- # '[' -z 4050387 ']' 00:22:24.054 04:19:38 -- 
common/autotest_common.sh@930 -- # kill -0 4050387 00:22:24.054 04:19:38 -- common/autotest_common.sh@931 -- # uname 00:22:24.054 04:19:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:24.054 04:19:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4050387 00:22:24.055 04:19:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:24.055 04:19:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:24.055 04:19:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4050387' 00:22:24.055 killing process with pid 4050387 00:22:24.055 04:19:38 -- common/autotest_common.sh@945 -- # kill 4050387 00:22:24.055 04:19:38 -- common/autotest_common.sh@950 -- # wait 4050387 00:22:24.623 04:19:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:24.623 04:19:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:24.623 04:19:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:24.623 04:19:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:24.623 04:19:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:24.623 04:19:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.623 04:19:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.623 04:19:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.530 04:19:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:26.530 00:22:26.530 real 0m11.101s 00:22:26.530 user 0m23.049s 00:22:26.530 sys 0m5.422s 00:22:26.530 04:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.530 04:19:41 -- common/autotest_common.sh@10 -- # set +x 00:22:26.530 ************************************ 00:22:26.530 END TEST nvmf_bdev_io_wait 00:22:26.530 ************************************ 00:22:26.530 04:19:41 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:26.530 04:19:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:26.530 04:19:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:26.530 04:19:41 -- common/autotest_common.sh@10 -- # set +x 00:22:26.530 ************************************ 00:22:26.530 START TEST nvmf_queue_depth 00:22:26.530 ************************************ 00:22:26.530 04:19:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:26.789 * Looking for test storage... 
00:22:26.789 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:26.789 04:19:41 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.789 04:19:41 -- nvmf/common.sh@7 -- # uname -s 00:22:26.789 04:19:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.789 04:19:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.789 04:19:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.789 04:19:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.789 04:19:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.789 04:19:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.789 04:19:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.789 04:19:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.789 04:19:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.789 04:19:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.789 04:19:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:22:26.789 04:19:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:22:26.789 04:19:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.789 04:19:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.789 04:19:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:26.789 04:19:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:26.789 04:19:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.789 04:19:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.789 04:19:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.789 04:19:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.789 04:19:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.789 04:19:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.789 04:19:41 -- paths/export.sh@5 -- # export PATH 00:22:26.789 04:19:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.789 04:19:41 -- nvmf/common.sh@46 -- # : 0 00:22:26.789 04:19:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:26.789 04:19:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:26.789 04:19:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:26.789 04:19:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.789 04:19:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.789 04:19:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:26.789 04:19:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:26.789 04:19:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:26.789 04:19:41 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:22:26.789 04:19:41 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:22:26.789 04:19:41 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.789 04:19:41 -- target/queue_depth.sh@19 -- # nvmftestinit 00:22:26.789 04:19:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:26.789 04:19:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.789 04:19:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:26.789 04:19:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:26.789 04:19:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:26.789 04:19:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.789 04:19:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.789 04:19:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.789 04:19:41 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:26.789 04:19:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:26.789 04:19:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:26.789 04:19:41 -- common/autotest_common.sh@10 -- # set +x 00:22:32.101 04:19:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:32.101 04:19:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:32.101 04:19:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:32.101 04:19:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:32.101 04:19:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:32.101 04:19:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:32.101 04:19:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:32.101 04:19:45 -- nvmf/common.sh@294 -- # 
net_devs=() 00:22:32.101 04:19:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:32.101 04:19:45 -- nvmf/common.sh@295 -- # e810=() 00:22:32.101 04:19:45 -- nvmf/common.sh@295 -- # local -ga e810 00:22:32.101 04:19:45 -- nvmf/common.sh@296 -- # x722=() 00:22:32.101 04:19:45 -- nvmf/common.sh@296 -- # local -ga x722 00:22:32.101 04:19:45 -- nvmf/common.sh@297 -- # mlx=() 00:22:32.101 04:19:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:32.101 04:19:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.101 04:19:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:32.101 04:19:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:32.101 04:19:45 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:32.101 04:19:45 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:32.101 04:19:45 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:32.101 04:19:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:32.101 04:19:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:32.101 04:19:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:32.101 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:32.101 04:19:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:32.102 04:19:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:32.102 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:32.102 04:19:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:32.102 04:19:45 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:32.102 04:19:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.102 04:19:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:32.102 04:19:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.102 04:19:45 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:27:00.0: cvl_0_0' 00:22:32.102 Found net devices under 0000:27:00.0: cvl_0_0 00:22:32.102 04:19:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.102 04:19:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:32.102 04:19:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.102 04:19:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:32.102 04:19:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.102 04:19:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:32.102 Found net devices under 0000:27:00.1: cvl_0_1 00:22:32.102 04:19:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.102 04:19:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:32.102 04:19:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:32.102 04:19:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:32.102 04:19:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:32.102 04:19:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.102 04:19:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.102 04:19:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.102 04:19:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:32.102 04:19:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.102 04:19:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.102 04:19:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:32.102 04:19:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.102 04:19:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.102 04:19:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:32.102 04:19:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:32.102 04:19:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.102 04:19:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.102 04:19:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.102 04:19:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.102 04:19:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:32.102 04:19:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.102 04:19:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.102 04:19:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.102 04:19:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:32.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:22:32.102 00:22:32.102 --- 10.0.0.2 ping statistics --- 00:22:32.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.102 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:32.102 04:19:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:22:32.102 00:22:32.102 --- 10.0.0.1 ping statistics --- 00:22:32.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.102 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:22:32.102 04:19:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.102 04:19:46 -- nvmf/common.sh@410 -- # return 0 00:22:32.102 04:19:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:32.102 04:19:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.102 04:19:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:32.102 04:19:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:32.102 04:19:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.102 04:19:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:32.102 04:19:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:32.102 04:19:46 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:22:32.102 04:19:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:32.102 04:19:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:32.102 04:19:46 -- common/autotest_common.sh@10 -- # set +x 00:22:32.102 04:19:46 -- nvmf/common.sh@469 -- # nvmfpid=4054951 00:22:32.102 04:19:46 -- nvmf/common.sh@470 -- # waitforlisten 4054951 00:22:32.102 04:19:46 -- common/autotest_common.sh@819 -- # '[' -z 4054951 ']' 00:22:32.102 04:19:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.102 04:19:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:32.102 04:19:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.102 04:19:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:32.102 04:19:46 -- common/autotest_common.sh@10 -- # set +x 00:22:32.102 04:19:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:32.102 [2024-05-14 04:19:46.313893] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:32.102 [2024-05-14 04:19:46.314025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.102 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.102 [2024-05-14 04:19:46.453486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.102 [2024-05-14 04:19:46.551458] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:32.102 [2024-05-14 04:19:46.551652] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.102 [2024-05-14 04:19:46.551667] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.102 [2024-05-14 04:19:46.551677] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
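For reference, the target bring-up traced here by nvmfappstart reduces to a short sequence. This is a condensed sketch, not the literal nvmf/common.sh code: the binary path, namespace, core mask and tracepoint mask are the ones echoed above, while the readiness poll is a simplified stand-in for waitforlisten.

  # launch nvmf_tgt inside the test namespace: instance 0, all tracepoint groups, core mask 0x2
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # wait until the app is serving RPCs on /var/tmp/spdk.sock before issuing nvmf_create_transport etc.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done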
00:22:32.102 [2024-05-14 04:19:46.551717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.672 04:19:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:32.672 04:19:47 -- common/autotest_common.sh@852 -- # return 0 00:22:32.672 04:19:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:32.672 04:19:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:32.672 04:19:47 -- common/autotest_common.sh@10 -- # set +x 00:22:32.672 04:19:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.672 04:19:47 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:32.672 04:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.672 04:19:47 -- common/autotest_common.sh@10 -- # set +x 00:22:32.672 [2024-05-14 04:19:47.068849] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.672 04:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.672 04:19:47 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:32.672 04:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.672 04:19:47 -- common/autotest_common.sh@10 -- # set +x 00:22:32.672 Malloc0 00:22:32.672 04:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.672 04:19:47 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:32.672 04:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.672 04:19:47 -- common/autotest_common.sh@10 -- # set +x 00:22:32.672 04:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.672 04:19:47 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:32.672 04:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.672 04:19:47 -- common/autotest_common.sh@10 -- # set +x 00:22:32.672 04:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.672 04:19:47 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.672 04:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.672 04:19:47 -- common/autotest_common.sh@10 -- # set +x 00:22:32.672 [2024-05-14 04:19:47.157570] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.672 04:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.672 04:19:47 -- target/queue_depth.sh@30 -- # bdevperf_pid=4055157 00:22:32.672 04:19:47 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.672 04:19:47 -- target/queue_depth.sh@33 -- # waitforlisten 4055157 /var/tmp/bdevperf.sock 00:22:32.672 04:19:47 -- common/autotest_common.sh@819 -- # '[' -z 4055157 ']' 00:22:32.672 04:19:47 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:22:32.672 04:19:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.672 04:19:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:32.672 04:19:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:32.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.672 04:19:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:32.672 04:19:47 -- common/autotest_common.sh@10 -- # set +x 00:22:32.672 [2024-05-14 04:19:47.233998] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:32.672 [2024-05-14 04:19:47.234108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4055157 ] 00:22:32.933 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.933 [2024-05-14 04:19:47.351766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.933 [2024-05-14 04:19:47.442621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.505 04:19:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:33.505 04:19:47 -- common/autotest_common.sh@852 -- # return 0 00:22:33.505 04:19:47 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:33.505 04:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.505 04:19:47 -- common/autotest_common.sh@10 -- # set +x 00:22:33.765 NVMe0n1 00:22:33.766 04:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.766 04:19:48 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:33.766 Running I/O for 10 seconds... 00:22:43.743 00:22:43.743 Latency(us) 00:22:43.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.743 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:22:43.743 Verification LBA range: start 0x0 length 0x4000 00:22:43.743 NVMe0n1 : 10.05 18204.40 71.11 0.00 0.00 56087.31 11037.64 47185.92 00:22:43.743 =================================================================================================================== 00:22:43.743 Total : 18204.40 71.11 0.00 0.00 56087.31 11037.64 47185.92 00:22:43.743 0 00:22:43.743 04:19:58 -- target/queue_depth.sh@39 -- # killprocess 4055157 00:22:43.743 04:19:58 -- common/autotest_common.sh@926 -- # '[' -z 4055157 ']' 00:22:43.744 04:19:58 -- common/autotest_common.sh@930 -- # kill -0 4055157 00:22:43.744 04:19:58 -- common/autotest_common.sh@931 -- # uname 00:22:43.744 04:19:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:43.744 04:19:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4055157 00:22:43.744 04:19:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:43.744 04:19:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:43.744 04:19:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4055157' 00:22:43.744 killing process with pid 4055157 00:22:43.744 04:19:58 -- common/autotest_common.sh@945 -- # kill 4055157 00:22:43.744 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.744 00:22:43.744 Latency(us) 00:22:43.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.744 =================================================================================================================== 00:22:43.744 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.744 04:19:58 -- 
common/autotest_common.sh@950 -- # wait 4055157 00:22:44.311 04:19:58 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:44.311 04:19:58 -- target/queue_depth.sh@43 -- # nvmftestfini 00:22:44.311 04:19:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:44.311 04:19:58 -- nvmf/common.sh@116 -- # sync 00:22:44.311 04:19:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:44.311 04:19:58 -- nvmf/common.sh@119 -- # set +e 00:22:44.311 04:19:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:44.311 04:19:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:44.311 rmmod nvme_tcp 00:22:44.311 rmmod nvme_fabrics 00:22:44.311 rmmod nvme_keyring 00:22:44.311 04:19:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:44.311 04:19:58 -- nvmf/common.sh@123 -- # set -e 00:22:44.311 04:19:58 -- nvmf/common.sh@124 -- # return 0 00:22:44.311 04:19:58 -- nvmf/common.sh@477 -- # '[' -n 4054951 ']' 00:22:44.311 04:19:58 -- nvmf/common.sh@478 -- # killprocess 4054951 00:22:44.311 04:19:58 -- common/autotest_common.sh@926 -- # '[' -z 4054951 ']' 00:22:44.311 04:19:58 -- common/autotest_common.sh@930 -- # kill -0 4054951 00:22:44.311 04:19:58 -- common/autotest_common.sh@931 -- # uname 00:22:44.311 04:19:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:44.311 04:19:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4054951 00:22:44.311 04:19:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:44.311 04:19:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:44.311 04:19:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4054951' 00:22:44.311 killing process with pid 4054951 00:22:44.311 04:19:58 -- common/autotest_common.sh@945 -- # kill 4054951 00:22:44.311 04:19:58 -- common/autotest_common.sh@950 -- # wait 4054951 00:22:44.879 04:19:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:44.879 04:19:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:44.879 04:19:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:44.879 04:19:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.879 04:19:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:44.879 04:19:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.879 04:19:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.879 04:19:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.782 04:20:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:46.782 00:22:46.782 real 0m20.247s 00:22:46.782 user 0m25.238s 00:22:46.782 sys 0m5.166s 00:22:46.782 04:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.782 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:22:46.782 ************************************ 00:22:46.782 END TEST nvmf_queue_depth 00:22:46.782 ************************************ 00:22:46.782 04:20:01 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:46.782 04:20:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:46.782 04:20:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:46.782 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:22:46.782 ************************************ 00:22:46.782 START TEST nvmf_multipath 00:22:46.782 ************************************ 00:22:46.782 04:20:01 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:47.042 * Looking for test storage... 00:22:47.042 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:47.042 04:20:01 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.042 04:20:01 -- nvmf/common.sh@7 -- # uname -s 00:22:47.042 04:20:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.042 04:20:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.042 04:20:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.042 04:20:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.042 04:20:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.042 04:20:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.042 04:20:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.042 04:20:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.042 04:20:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.042 04:20:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.042 04:20:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:22:47.042 04:20:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:22:47.042 04:20:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.042 04:20:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.042 04:20:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:47.042 04:20:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:47.042 04:20:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.042 04:20:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.042 04:20:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.042 04:20:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.042 04:20:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.042 04:20:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.042 04:20:01 -- paths/export.sh@5 -- # export PATH 00:22:47.042 04:20:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.042 04:20:01 -- nvmf/common.sh@46 -- # : 0 00:22:47.042 04:20:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:47.042 04:20:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:47.042 04:20:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:47.042 04:20:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.042 04:20:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.042 04:20:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:47.042 04:20:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:47.042 04:20:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:47.042 04:20:01 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.042 04:20:01 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.042 04:20:01 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:47.042 04:20:01 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:47.042 04:20:01 -- target/multipath.sh@43 -- # nvmftestinit 00:22:47.042 04:20:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:47.042 04:20:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.042 04:20:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:47.042 04:20:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:47.042 04:20:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:47.043 04:20:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.043 04:20:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.043 04:20:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.043 04:20:01 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:47.043 04:20:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:47.043 04:20:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:47.043 04:20:01 -- common/autotest_common.sh@10 -- # set +x 00:22:52.312 04:20:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:52.312 04:20:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:52.312 04:20:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:52.312 04:20:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:52.312 04:20:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:52.312 04:20:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:52.312 04:20:06 
-- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:52.312 04:20:06 -- nvmf/common.sh@294 -- # net_devs=() 00:22:52.312 04:20:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:52.312 04:20:06 -- nvmf/common.sh@295 -- # e810=() 00:22:52.312 04:20:06 -- nvmf/common.sh@295 -- # local -ga e810 00:22:52.312 04:20:06 -- nvmf/common.sh@296 -- # x722=() 00:22:52.312 04:20:06 -- nvmf/common.sh@296 -- # local -ga x722 00:22:52.312 04:20:06 -- nvmf/common.sh@297 -- # mlx=() 00:22:52.312 04:20:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:52.312 04:20:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.312 04:20:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:52.312 04:20:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:52.312 04:20:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:52.312 04:20:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:52.312 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:52.312 04:20:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:52.312 04:20:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:52.312 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:52.312 04:20:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:52.312 04:20:06 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:52.312 04:20:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:52.313 04:20:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.313 04:20:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:52.313 04:20:06 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.313 04:20:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:52.313 Found net devices under 0000:27:00.0: cvl_0_0 00:22:52.313 04:20:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.313 04:20:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:52.313 04:20:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.313 04:20:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:52.313 04:20:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.313 04:20:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:52.313 Found net devices under 0000:27:00.1: cvl_0_1 00:22:52.313 04:20:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.313 04:20:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:52.313 04:20:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:52.313 04:20:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:52.313 04:20:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:52.313 04:20:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:52.313 04:20:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.313 04:20:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.313 04:20:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.313 04:20:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:52.313 04:20:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.313 04:20:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.313 04:20:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:52.313 04:20:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.313 04:20:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.313 04:20:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:52.313 04:20:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:52.313 04:20:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.313 04:20:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.313 04:20:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.313 04:20:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.313 04:20:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:52.313 04:20:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.313 04:20:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.313 04:20:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.313 04:20:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:52.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:22:52.313 00:22:52.313 --- 10.0.0.2 ping statistics --- 00:22:52.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.313 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:52.313 04:20:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:22:52.313 00:22:52.313 --- 10.0.0.1 ping statistics --- 00:22:52.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.313 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:52.313 04:20:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.313 04:20:06 -- nvmf/common.sh@410 -- # return 0 00:22:52.313 04:20:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:52.313 04:20:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.313 04:20:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:52.313 04:20:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:52.313 04:20:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.313 04:20:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:52.313 04:20:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:52.573 04:20:06 -- target/multipath.sh@45 -- # '[' -z ']' 00:22:52.573 04:20:06 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:22:52.573 only one NIC for nvmf test 00:22:52.573 04:20:06 -- target/multipath.sh@47 -- # nvmftestfini 00:22:52.573 04:20:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:52.573 04:20:06 -- nvmf/common.sh@116 -- # sync 00:22:52.573 04:20:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:52.573 04:20:06 -- nvmf/common.sh@119 -- # set +e 00:22:52.573 04:20:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:52.573 04:20:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:52.573 rmmod nvme_tcp 00:22:52.573 rmmod nvme_fabrics 00:22:52.573 rmmod nvme_keyring 00:22:52.573 04:20:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:52.573 04:20:06 -- nvmf/common.sh@123 -- # set -e 00:22:52.573 04:20:06 -- nvmf/common.sh@124 -- # return 0 00:22:52.573 04:20:06 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:52.573 04:20:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:52.573 04:20:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:52.573 04:20:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:52.573 04:20:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.573 04:20:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:52.573 04:20:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.573 04:20:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.573 04:20:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.481 04:20:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:54.481 04:20:09 -- target/multipath.sh@48 -- # exit 0 00:22:54.481 04:20:09 -- target/multipath.sh@1 -- # nvmftestfini 00:22:54.481 04:20:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:54.481 04:20:09 -- nvmf/common.sh@116 -- # sync 00:22:54.481 04:20:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:54.481 04:20:09 -- nvmf/common.sh@119 -- # set +e 00:22:54.481 04:20:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:54.481 04:20:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:54.481 04:20:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:54.481 04:20:09 -- nvmf/common.sh@123 -- # set -e 00:22:54.481 04:20:09 -- nvmf/common.sh@124 -- # return 0 00:22:54.481 04:20:09 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:54.481 04:20:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:54.481 04:20:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:54.481 04:20:09 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:22:54.481 04:20:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.481 04:20:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:54.481 04:20:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.481 04:20:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.481 04:20:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.481 04:20:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:54.481 00:22:54.481 real 0m7.666s 00:22:54.481 user 0m1.569s 00:22:54.481 sys 0m4.033s 00:22:54.481 04:20:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:54.481 04:20:09 -- common/autotest_common.sh@10 -- # set +x 00:22:54.481 ************************************ 00:22:54.481 END TEST nvmf_multipath 00:22:54.482 ************************************ 00:22:54.482 04:20:09 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:54.774 04:20:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:54.774 04:20:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:54.774 04:20:09 -- common/autotest_common.sh@10 -- # set +x 00:22:54.774 ************************************ 00:22:54.774 START TEST nvmf_zcopy 00:22:54.774 ************************************ 00:22:54.774 04:20:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:54.774 * Looking for test storage... 00:22:54.774 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:54.774 04:20:09 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.774 04:20:09 -- nvmf/common.sh@7 -- # uname -s 00:22:54.774 04:20:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.774 04:20:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.774 04:20:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.774 04:20:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.774 04:20:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.774 04:20:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.774 04:20:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.774 04:20:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.774 04:20:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.774 04:20:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.774 04:20:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:22:54.774 04:20:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:22:54.774 04:20:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.774 04:20:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.774 04:20:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:54.774 04:20:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:54.774 04:20:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.774 04:20:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.774 04:20:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.774 04:20:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.774 04:20:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.774 04:20:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.774 04:20:09 -- paths/export.sh@5 -- # export PATH 00:22:54.774 04:20:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.774 04:20:09 -- nvmf/common.sh@46 -- # : 0 00:22:54.774 04:20:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:54.774 04:20:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:54.774 04:20:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:54.774 04:20:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.774 04:20:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.774 04:20:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:54.774 04:20:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:54.774 04:20:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:54.774 04:20:09 -- target/zcopy.sh@12 -- # nvmftestinit 00:22:54.774 04:20:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:54.774 04:20:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.774 04:20:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:54.774 04:20:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:54.774 04:20:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:54.774 04:20:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.774 04:20:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.774 04:20:09 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.774 04:20:09 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:54.774 04:20:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:54.774 04:20:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:54.774 04:20:09 -- common/autotest_common.sh@10 -- # set +x 00:23:01.340 04:20:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:01.340 04:20:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:01.340 04:20:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:01.340 04:20:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:01.340 04:20:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:01.340 04:20:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:01.340 04:20:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:01.340 04:20:14 -- nvmf/common.sh@294 -- # net_devs=() 00:23:01.340 04:20:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:01.340 04:20:14 -- nvmf/common.sh@295 -- # e810=() 00:23:01.340 04:20:14 -- nvmf/common.sh@295 -- # local -ga e810 00:23:01.340 04:20:14 -- nvmf/common.sh@296 -- # x722=() 00:23:01.340 04:20:14 -- nvmf/common.sh@296 -- # local -ga x722 00:23:01.340 04:20:14 -- nvmf/common.sh@297 -- # mlx=() 00:23:01.340 04:20:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:01.340 04:20:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.340 04:20:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:01.340 04:20:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:01.340 04:20:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:01.340 04:20:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:01.340 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:01.340 04:20:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:01.340 04:20:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:01.340 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:01.340 04:20:14 
-- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:01.340 04:20:14 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:01.340 04:20:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.340 04:20:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:01.340 04:20:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.340 04:20:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:01.340 Found net devices under 0000:27:00.0: cvl_0_0 00:23:01.340 04:20:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.340 04:20:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:01.340 04:20:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.340 04:20:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:01.340 04:20:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.340 04:20:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:01.340 Found net devices under 0000:27:00.1: cvl_0_1 00:23:01.340 04:20:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.340 04:20:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:01.340 04:20:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:01.340 04:20:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:01.340 04:20:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:01.340 04:20:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.340 04:20:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.340 04:20:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.340 04:20:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:01.340 04:20:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.340 04:20:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.340 04:20:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:01.340 04:20:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.340 04:20:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.340 04:20:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:01.340 04:20:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:01.340 04:20:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.340 04:20:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.340 04:20:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.340 04:20:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.340 04:20:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:01.340 04:20:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.340 04:20:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.340 04:20:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
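Condensed from the nvmf_tcp_init trace above, the test builds a two-port loopback topology: the target port cvl_0_0 is moved into its own network namespace and addressed as 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, and TCP port 4420 is opened towards the initiator interface. The equivalent commands, taken from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic

The two ping checks that follow confirm reachability in both directions before modprobe nvme-tcp loads the initiator driver.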
00:23:01.340 04:20:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:01.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:23:01.340 00:23:01.340 --- 10.0.0.2 ping statistics --- 00:23:01.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.340 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:23:01.340 04:20:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:23:01.340 00:23:01.340 --- 10.0.0.1 ping statistics --- 00:23:01.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.340 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:23:01.341 04:20:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.341 04:20:14 -- nvmf/common.sh@410 -- # return 0 00:23:01.341 04:20:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:01.341 04:20:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.341 04:20:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:01.341 04:20:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:01.341 04:20:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.341 04:20:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:01.341 04:20:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:01.341 04:20:14 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:23:01.341 04:20:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:01.341 04:20:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:01.341 04:20:14 -- common/autotest_common.sh@10 -- # set +x 00:23:01.341 04:20:14 -- nvmf/common.sh@469 -- # nvmfpid=4065456 00:23:01.341 04:20:14 -- nvmf/common.sh@470 -- # waitforlisten 4065456 00:23:01.341 04:20:14 -- common/autotest_common.sh@819 -- # '[' -z 4065456 ']' 00:23:01.341 04:20:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.341 04:20:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:01.341 04:20:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.341 04:20:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:01.341 04:20:14 -- common/autotest_common.sh@10 -- # set +x 00:23:01.341 04:20:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:01.341 [2024-05-14 04:20:15.006201] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:01.341 [2024-05-14 04:20:15.006304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.341 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.341 [2024-05-14 04:20:15.125621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.341 [2024-05-14 04:20:15.222634] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:01.341 [2024-05-14 04:20:15.222809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:01.341 [2024-05-14 04:20:15.222826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.341 [2024-05-14 04:20:15.222834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.341 [2024-05-14 04:20:15.222860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.341 04:20:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:01.341 04:20:15 -- common/autotest_common.sh@852 -- # return 0 00:23:01.341 04:20:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:01.341 04:20:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:01.341 04:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.341 04:20:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.341 04:20:15 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:23:01.341 04:20:15 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:23:01.341 04:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.341 04:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.341 [2024-05-14 04:20:15.735469] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.341 04:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.341 04:20:15 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:01.341 04:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.341 04:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.341 04:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.341 04:20:15 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.341 04:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.341 04:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.341 [2024-05-14 04:20:15.751604] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.341 04:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.341 04:20:15 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:01.341 04:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.341 04:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.341 04:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.341 04:20:15 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:23:01.341 04:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.341 04:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.341 malloc0 00:23:01.341 04:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.341 04:20:15 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:01.341 04:20:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.341 04:20:15 -- common/autotest_common.sh@10 -- # set +x 00:23:01.341 04:20:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.341 04:20:15 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:23:01.341 04:20:15 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:23:01.341 04:20:15 -- nvmf/common.sh@520 -- # config=() 00:23:01.341 04:20:15 -- 
nvmf/common.sh@520 -- # local subsystem config 00:23:01.341 04:20:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:01.341 04:20:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:01.341 { 00:23:01.341 "params": { 00:23:01.341 "name": "Nvme$subsystem", 00:23:01.341 "trtype": "$TEST_TRANSPORT", 00:23:01.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.341 "adrfam": "ipv4", 00:23:01.341 "trsvcid": "$NVMF_PORT", 00:23:01.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.341 "hdgst": ${hdgst:-false}, 00:23:01.341 "ddgst": ${ddgst:-false} 00:23:01.341 }, 00:23:01.341 "method": "bdev_nvme_attach_controller" 00:23:01.341 } 00:23:01.341 EOF 00:23:01.341 )") 00:23:01.341 04:20:15 -- nvmf/common.sh@542 -- # cat 00:23:01.341 04:20:15 -- nvmf/common.sh@544 -- # jq . 00:23:01.341 04:20:15 -- nvmf/common.sh@545 -- # IFS=, 00:23:01.341 04:20:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:01.341 "params": { 00:23:01.341 "name": "Nvme1", 00:23:01.341 "trtype": "tcp", 00:23:01.341 "traddr": "10.0.0.2", 00:23:01.341 "adrfam": "ipv4", 00:23:01.341 "trsvcid": "4420", 00:23:01.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.341 "hdgst": false, 00:23:01.341 "ddgst": false 00:23:01.341 }, 00:23:01.341 "method": "bdev_nvme_attach_controller" 00:23:01.341 }' 00:23:01.341 [2024-05-14 04:20:15.873856] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:01.341 [2024-05-14 04:20:15.873959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065496 ] 00:23:01.599 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.599 [2024-05-14 04:20:15.984829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.600 [2024-05-14 04:20:16.077339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.858 Running I/O for 10 seconds... 
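The verify pass that follows is driven entirely from JSON: gen_nvmf_target_json (from nvmf/common.sh, invoked at target/zcopy.sh@33 above) prints a single-controller config attaching Nvme1 to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and bdevperf reads it from an anonymous file descriptor instead of taking RPCs. A minimal equivalent invocation, assuming the SPDK checkout used by this job and that nvmf/common.sh has been sourced so the helper is defined:

  # 10 s verify workload, queue depth 128, 8 KiB I/O, target taken from the generated JSON config
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192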
00:23:11.844
00:23:11.844                                                               Latency(us)
00:23:11.844 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:11.844 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:23:11.844 Verification LBA range: start 0x0 length 0x1000
00:23:11.844 Nvme1n1                                                                   :      10.01   13309.73     103.98       0.00     0.00    9595.52    1552.17   15866.61
00:23:11.844 ===================================================================================================================
00:23:11.844 Total                                                                     :              13309.73     103.98       0.00     0.00    9595.52    1552.17   15866.61
00:23:12.103 04:20:26 -- target/zcopy.sh@39 -- # perfpid=4067610 00:23:12.103 04:20:26 -- target/zcopy.sh@41 -- # xtrace_disable 00:23:12.103 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:23:12.103 04:20:26 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:23:12.103 04:20:26 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:23:12.103 04:20:26 -- nvmf/common.sh@520 -- # config=() 00:23:12.103 04:20:26 -- nvmf/common.sh@520 -- # local subsystem config 00:23:12.103 04:20:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:12.103 04:20:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:12.103 { 00:23:12.103 "params": { 00:23:12.103 "name": "Nvme$subsystem", 00:23:12.103 "trtype": "$TEST_TRANSPORT", 00:23:12.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.103 "adrfam": "ipv4", 00:23:12.103 "trsvcid": "$NVMF_PORT", 00:23:12.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.103 "hdgst": ${hdgst:-false}, 00:23:12.103 "ddgst": ${ddgst:-false} 00:23:12.103 }, 00:23:12.103 "method": "bdev_nvme_attach_controller" 00:23:12.103 } 00:23:12.103 EOF 00:23:12.103 )") 00:23:12.103 04:20:26 -- nvmf/common.sh@542 -- # cat 00:23:12.103 [2024-05-14 04:20:26.678384] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.103 [2024-05-14 04:20:26.678429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.103 04:20:26 -- nvmf/common.sh@544 -- # jq .
00:23:12.103 04:20:26 -- nvmf/common.sh@545 -- # IFS=, 00:23:12.103 04:20:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:12.103 "params": { 00:23:12.103 "name": "Nvme1", 00:23:12.103 "trtype": "tcp", 00:23:12.103 "traddr": "10.0.0.2", 00:23:12.103 "adrfam": "ipv4", 00:23:12.103 "trsvcid": "4420", 00:23:12.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.103 "hdgst": false, 00:23:12.103 "ddgst": false 00:23:12.103 }, 00:23:12.103 "method": "bdev_nvme_attach_controller" 00:23:12.103 }' 00:23:12.103 [2024-05-14 04:20:26.686323] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.103 [2024-05-14 04:20:26.686342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.694299] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.694316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.702308] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.702323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.710304] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.710318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.718296] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.718315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.726307] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.726321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.734310] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.734324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.736640] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:23:12.363 [2024-05-14 04:20:26.736746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4067610 ] 00:23:12.363 [2024-05-14 04:20:26.742317] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.742331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.750315] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.750328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.758310] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.758325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.766322] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.766337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.774325] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.774342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.782319] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.782341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.790327] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.790344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.798328] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.798345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.806322] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.806338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.363 [2024-05-14 04:20:26.814331] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.814345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.822323] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.822337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.830335] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.830348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.838341] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.838356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.846337] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.846353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.852977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.363 [2024-05-14 04:20:26.854345] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.854361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.862364] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.862381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.870349] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.870371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.878353] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.878369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.886347] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.886362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.894357] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.894373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.902362] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.902377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.910355] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.910371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.918363] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.918379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.926364] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.926379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.934376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.934392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.942367] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.363 [2024-05-14 04:20:26.942383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.363 [2024-05-14 04:20:26.947709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.622 [2024-05-14 04:20:26.950363] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.622 [2024-05-14 04:20:26.950381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:23:12.622 [2024-05-14 04:20:26.958375] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.622 [2024-05-14 04:20:26.958392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.622 [2024-05-14 04:20:26.966383] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.622 [2024-05-14 04:20:26.966400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.622 [2024-05-14 04:20:26.974371] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.622 [2024-05-14 04:20:26.974388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.622 [2024-05-14 04:20:26.982382] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.622 [2024-05-14 04:20:26.982399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.622 [2024-05-14 04:20:26.990380] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.622 [2024-05-14 04:20:26.990396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.622 [2024-05-14 04:20:26.998378] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.622 [2024-05-14 04:20:26.998396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.622 [2024-05-14 04:20:27.006386] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.622 [2024-05-14 04:20:27.006404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.014380] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.014396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.022394] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.022411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.030402] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.030417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.038386] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.038401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.046403] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.046417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.054396] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.054411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.062394] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.062408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.070403] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:23:12.623 [2024-05-14 04:20:27.070418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.078399] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.078413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.086412] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.086426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.094435] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.094460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.102418] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.102438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.110431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.110450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.118441] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.118462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.126439] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.126459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.134435] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.134454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.142428] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.142444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.150443] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.150461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.158445] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.158461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.166434] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.166449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.174456] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.174475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.182452] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.182468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.190443] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.190459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.198453] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.198469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.623 [2024-05-14 04:20:27.206446] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.623 [2024-05-14 04:20:27.206461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.881 [2024-05-14 04:20:27.214469] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.881 [2024-05-14 04:20:27.214487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.881 [2024-05-14 04:20:27.222482] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.881 [2024-05-14 04:20:27.222500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.881 [2024-05-14 04:20:27.230465] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.881 [2024-05-14 04:20:27.230481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.881 [2024-05-14 04:20:27.238477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.881 [2024-05-14 04:20:27.238493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.881 [2024-05-14 04:20:27.246477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.881 [2024-05-14 04:20:27.246492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.881 [2024-05-14 04:20:27.254470] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.881 [2024-05-14 04:20:27.254484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.881 [2024-05-14 04:20:27.262484] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.881 [2024-05-14 04:20:27.262500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.881 [2024-05-14 04:20:27.270487] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.881 [2024-05-14 04:20:27.270502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.881 [2024-05-14 04:20:27.278514] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.881 [2024-05-14 04:20:27.278540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.881 [2024-05-14 04:20:27.286502] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.286518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 Running I/O for 5 seconds... 
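From here on the log is dominated by repeating pairs of spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" and nvmf_rpc_ns_paused "Unable to add namespace" errors. They are expected: while the second bdevperf instance runs its 5-second randrw workload (queue depth 128, 8 KiB I/O, -M 50 read mix), the test keeps re-issuing nvmf_subsystem_add_ns for the already-attached namespace 1. Each attempt pauses and then resumes the subsystem before failing, so requests queued during the pause, including zero-copy ones, get exercised while I/O is in flight. A rough sketch of that loop, reusing the rpc.py calls and config file assumed above (the exact wording in target/zcopy.sh may differ):

# Start the random read/write run in the background and hammer the namespace RPC
# until it exits. Every add_ns attempt fails (NSID 1 is taken) but still forces a
# subsystem pause/resume while bdevperf keeps 128 I/Os outstanding.
./build/examples/bdevperf --json /tmp/zcopy_bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
while kill -0 "$perfpid" 2>/dev/null; do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"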
00:23:12.882 [2024-05-14 04:20:27.298915] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.298942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.309801] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.309831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.318880] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.318906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.328100] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.328126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.337583] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.337608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.346515] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.346540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.355854] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.355880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.364832] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.364858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.373599] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.373624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.382796] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.382823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.391768] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.391795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.401597] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.401625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.410992] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.411019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.419928] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.419954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.429657] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 
[2024-05-14 04:20:27.429683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.439194] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.439221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.448300] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.448324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.457008] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.457034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:12.882 [2024-05-14 04:20:27.466202] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:12.882 [2024-05-14 04:20:27.466226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.475050] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.475076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.484157] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.484183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.493366] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.493391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.502679] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.502706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.511766] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.511791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.520905] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.520931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.529957] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.529981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.538998] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.539025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.548394] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.548418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.557321] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.557346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.566313] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.566337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.575684] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.575710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.584640] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.584665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.593387] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.593411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.602454] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.602479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.611220] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.611246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.620532] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.620557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.629552] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.629577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.638736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.638760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.648058] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.648084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.657457] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.657482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.666980] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.667003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.676303] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.676326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.685439] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.685463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.694716] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.694741] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.703831] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.703857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.713048] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.713073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.141 [2024-05-14 04:20:27.722167] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.141 [2024-05-14 04:20:27.722198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.400 [2024-05-14 04:20:27.731342] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.400 [2024-05-14 04:20:27.731369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.400 [2024-05-14 04:20:27.740515] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.400 [2024-05-14 04:20:27.740540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.400 [2024-05-14 04:20:27.749246] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.400 [2024-05-14 04:20:27.749273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.400 [2024-05-14 04:20:27.757997] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.758022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.767115] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.767141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.776960] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.776987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.785993] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.786018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.794941] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.794964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.804361] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.804387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.813017] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.813040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.822263] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.822290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.831633] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.831658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.840774] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.840799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.849699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.849723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.858483] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.858510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.867548] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.867573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.876650] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.876676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.885213] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.885239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.893992] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.894016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.903148] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.903175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.912661] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.912685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.921844] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.921872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.930890] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.930915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.940257] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.940283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.949765] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.949791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.958687] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.958712] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.968388] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.968414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.977658] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.977684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.401 [2024-05-14 04:20:27.986936] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.401 [2024-05-14 04:20:27.986962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:27.996416] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:27.996446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.005708] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.005733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.014422] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.014446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.023790] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.023816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.032797] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.032823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.041621] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.041647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.050274] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.050298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.059468] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.059495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.068365] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.068390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.077141] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.077166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.086346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.086372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.095199] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.095224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.104224] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.104260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.113678] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.113703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.122298] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.122325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.131287] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.131312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.140468] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.140496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.149376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.149401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.158890] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.158916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.168008] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.168036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.177338] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.177362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.186614] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.186640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.195865] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.195890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.205203] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.205229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.214373] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.214400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.223568] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.223592] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.232821] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.232847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.661 [2024-05-14 04:20:28.242346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.661 [2024-05-14 04:20:28.242375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.251664] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.251691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.261012] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.261037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.270437] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.270463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.279698] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.279722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.289280] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.289305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.298648] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.298673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.307969] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.307997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.316781] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.316806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.325568] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.325596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.334629] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.334658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.343918] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.343952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.352820] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.352845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.362104] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.362133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.371342] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.371366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.380174] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.380203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.389446] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.389473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.398784] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.921 [2024-05-14 04:20:28.398812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.921 [2024-05-14 04:20:28.408782] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.408814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.922 [2024-05-14 04:20:28.418267] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.418296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.922 [2024-05-14 04:20:28.427173] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.427206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.922 [2024-05-14 04:20:28.436642] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.436669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.922 [2024-05-14 04:20:28.445875] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.445901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.922 [2024-05-14 04:20:28.455235] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.455263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.922 [2024-05-14 04:20:28.464340] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.464368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.922 [2024-05-14 04:20:28.473349] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.473375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.922 [2024-05-14 04:20:28.482518] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.482545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.922 [2024-05-14 04:20:28.491640] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.491668] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:13.922 [2024-05-14 04:20:28.500866] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:13.922 [2024-05-14 04:20:28.500892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.182 [2024-05-14 04:20:28.510149] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.182 [2024-05-14 04:20:28.510176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.182 [2024-05-14 04:20:28.519603] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.182 [2024-05-14 04:20:28.519636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.182 [2024-05-14 04:20:28.528911] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.182 [2024-05-14 04:20:28.528938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.182 [2024-05-14 04:20:28.538180] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.182 [2024-05-14 04:20:28.538214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.182 [2024-05-14 04:20:28.547442] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.547469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.556768] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.556794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.565933] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.565962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.575596] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.575621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.584468] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.584497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.593758] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.593784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.602736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.602761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.611856] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.611881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.620502] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.620527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.629845] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.629870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.639783] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.639808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.648798] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.648826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.657503] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.657531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.666327] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.666353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.675441] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.675466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.684120] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.684146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.693382] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.693412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.702821] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.702850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.711235] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.711261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.720374] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.720403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.729206] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.729234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.737809] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.737834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.746993] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.747018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.756406] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.756434] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.183 [2024-05-14 04:20:28.765677] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.183 [2024-05-14 04:20:28.765707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.774916] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.774945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.784313] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.784340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.793327] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.793354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.802410] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.802436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.811474] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.811500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.821053] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.821078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.829494] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.829522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.838737] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.838762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.847491] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.847517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.856767] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.856792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.866057] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.866084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.875415] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.875444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.884627] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.884655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.893765] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.893793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.902673] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.902700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.911935] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.911961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.921240] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.921267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.930476] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.930501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.939361] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.939385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.948660] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.948687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.957979] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.958004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.967231] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.967257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.976664] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.976689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.985111] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.985134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:28.994345] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:28.994369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:29.002641] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:29.002666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:29.011862] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:29.011887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.443 [2024-05-14 04:20:29.020820] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.443 [2024-05-14 04:20:29.020844] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.029869] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.029894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.038948] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.038973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.047942] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.047969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.057262] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.057287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.065552] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.065577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.075137] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.075161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.084398] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.084422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.093665] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.093689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.102894] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.102917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.112174] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.112204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.121516] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.121541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.130574] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.130600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.139800] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.139827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.149433] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.149461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.158719] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.158745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.167495] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.167519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.176344] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.176375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.185421] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.185444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.701 [2024-05-14 04:20:29.194195] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.701 [2024-05-14 04:20:29.194222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.702 [2024-05-14 04:20:29.203400] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.702 [2024-05-14 04:20:29.203426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.702 [2024-05-14 04:20:29.212391] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.702 [2024-05-14 04:20:29.212417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.702 [2024-05-14 04:20:29.221615] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.702 [2024-05-14 04:20:29.221640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.702 [2024-05-14 04:20:29.231059] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.702 [2024-05-14 04:20:29.231083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.702 [2024-05-14 04:20:29.239945] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.702 [2024-05-14 04:20:29.239969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.702 [2024-05-14 04:20:29.248627] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.702 [2024-05-14 04:20:29.248650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.702 [2024-05-14 04:20:29.258249] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.702 [2024-05-14 04:20:29.258278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.702 [2024-05-14 04:20:29.267400] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.702 [2024-05-14 04:20:29.267427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.702 [2024-05-14 04:20:29.276100] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.702 [2024-05-14 04:20:29.276125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.702 [2024-05-14 04:20:29.285483] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.702 [2024-05-14 04:20:29.285509] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.959 [2024-05-14 04:20:29.294697] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.959 [2024-05-14 04:20:29.294723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.959 [2024-05-14 04:20:29.302920] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.959 [2024-05-14 04:20:29.302946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.959 [2024-05-14 04:20:29.312535] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.312560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.320834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.320857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.330054] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.330078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.339480] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.339506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.348356] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.348382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.357693] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.357718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.367086] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.367113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.375721] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.375745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.384853] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.384880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.394376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.394402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.403656] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.403680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.411988] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.412011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.421218] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.421242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.430020] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.430048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.439320] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.439345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.448231] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.448258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.457458] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.457484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.466427] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.466453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.475506] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.475533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.484303] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.484328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.492983] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.493008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.502022] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.502046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.510892] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.510919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.519686] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.519711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.528471] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.528497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:14.960 [2024-05-14 04:20:29.537635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:14.960 [2024-05-14 04:20:29.537659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.219 [2024-05-14 04:20:29.546890] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.219 [2024-05-14 04:20:29.546922] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.219 [2024-05-14 04:20:29.556303] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.219 [2024-05-14 04:20:29.556328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.219 [2024-05-14 04:20:29.564989] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.219 [2024-05-14 04:20:29.565015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.219 [2024-05-14 04:20:29.574148] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.219 [2024-05-14 04:20:29.574174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.219 [2024-05-14 04:20:29.583472] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.219 [2024-05-14 04:20:29.583498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.219 [2024-05-14 04:20:29.592655] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.592679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.601955] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.601979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.610725] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.610747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.619480] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.619506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.628765] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.628790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.638402] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.638428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.647270] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.647296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.656421] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.656447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.664561] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.664583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.673764] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.673791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.683335] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.683360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.692053] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.692078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.701345] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.701369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.710686] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.710713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.719722] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.719752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.729032] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.729058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.738463] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.738490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.747781] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.747807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.756832] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.756855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.765830] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.765856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.775582] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.775608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.784108] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.784133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.793787] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.793813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.220 [2024-05-14 04:20:29.801965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.220 [2024-05-14 04:20:29.801990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.479 [2024-05-14 04:20:29.811272] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.811297] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.820295] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.820319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.829082] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.829108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.838385] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.838410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.847396] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.847423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.856602] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.856627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.865659] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.865686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.874792] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.874818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.884001] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.884028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.893254] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.893284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.902033] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.902058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.911379] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.911406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.920707] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.920734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.929621] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.929646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.938892] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.938919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.948170] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.948202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.957098] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.957125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.966054] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.966079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.974965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.974990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.984277] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.984301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:29.993710] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:29.993734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:30.003852] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:30.003884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:30.012453] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:30.012482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:30.023369] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:30.023403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:30.033983] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:30.034010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:30.043159] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:30.043196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:30.053109] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:30.053141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.480 [2024-05-14 04:20:30.062222] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.480 [2024-05-14 04:20:30.062247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.070944] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.070976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.079758] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.079784] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.089145] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.089170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.097489] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.097513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.106343] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.106370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.115918] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.115946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.124697] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.124723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.133919] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.133948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.143305] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.143331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.152317] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.152345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.161274] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.161301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.170668] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.170694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.179767] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.179795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.188572] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.188597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.197322] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.197347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.206086] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.206111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.215258] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.215283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.224480] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.224508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.233989] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.234016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.243553] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.243580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.252569] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.252595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.261828] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.261854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.271044] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.271071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.279947] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.279974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.288397] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.288422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.297600] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.297626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.307245] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.307271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.740 [2024-05-14 04:20:30.316230] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.740 [2024-05-14 04:20:30.316257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:15.741 [2024-05-14 04:20:30.325414] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:15.741 [2024-05-14 04:20:30.325441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.334367] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.334394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.343576] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.343602] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.352423] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.352449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.361494] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.361522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.370647] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.370673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.379384] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.379411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.388505] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.388531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.397919] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.397946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.407399] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.407425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.416315] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.416341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.425226] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.425253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.434432] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.434459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.443193] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.443218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.452151] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.452177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.461451] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.461477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.470268] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.470293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.478989] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.479016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.487849] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.487874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.497102] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.497128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.506010] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.506034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.515340] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.515367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.524591] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.524617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.533809] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.533833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.542586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.542612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.551914] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.551941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.560780] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.560805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.570114] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.570140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.579112] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.579137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.002 [2024-05-14 04:20:30.588627] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.002 [2024-05-14 04:20:30.588652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.262 [2024-05-14 04:20:30.597843] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.262 [2024-05-14 04:20:30.597868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.262 [2024-05-14 04:20:30.607141] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:16.262 [2024-05-14 04:20:30.607168] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.262
[... subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace; the same pair of errors repeats with only the timestamps advancing, from 04:20:30.616149 through 04:20:32.291311 ...]
00:23:17.822 Latency(us)
00:23:17.822 Device Information                                                              : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:23:17.822 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:23:17.822 Nvme1n1                                                                         :       5.01   18034.85     140.90      0.00     0.00    7090.60    3190.57   15659.65
00:23:17.822 ===================================================================================================================
00:23:17.822 Total                                                                           :              18034.85     140.90      0.00     0.00    7090.60    3190.57   15659.65
[... the NSID-collision pair resumes at 04:20:32.297860 and repeats through 04:20:32.649970; the final occurrence is ...]
[2024-05-14 04:20:32.657954] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:18.083 [2024-05-14 04:20:32.657968]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:18.083 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4067610) - No such process 00:23:18.083 04:20:32 -- target/zcopy.sh@49 -- # wait 4067610 00:23:18.083 04:20:32 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:18.083 04:20:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.083 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:23:18.342 04:20:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.342 04:20:32 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:18.342 04:20:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.342 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:23:18.342 delay0 00:23:18.342 04:20:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.342 04:20:32 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:23:18.342 04:20:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.342 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:23:18.342 04:20:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.342 04:20:32 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:23:18.342 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.342 [2024-05-14 04:20:32.811786] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:24.978 [2024-05-14 04:20:38.909345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:23:24.978 Initializing NVMe Controllers 00:23:24.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:24.978 Initialization complete. Launching workers. 
00:23:24.978 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 96 00:23:24.978 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 376, failed to submit 40 00:23:24.978 success 178, unsuccess 198, failed 0 00:23:24.978 04:20:38 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:23:24.978 04:20:38 -- target/zcopy.sh@60 -- # nvmftestfini 00:23:24.978 04:20:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:24.978 04:20:38 -- nvmf/common.sh@116 -- # sync 00:23:24.978 04:20:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:24.978 04:20:38 -- nvmf/common.sh@119 -- # set +e 00:23:24.978 04:20:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:24.978 04:20:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:24.978 rmmod nvme_tcp 00:23:24.978 rmmod nvme_fabrics 00:23:24.978 rmmod nvme_keyring 00:23:24.978 04:20:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:24.978 04:20:38 -- nvmf/common.sh@123 -- # set -e 00:23:24.978 04:20:38 -- nvmf/common.sh@124 -- # return 0 00:23:24.978 04:20:38 -- nvmf/common.sh@477 -- # '[' -n 4065456 ']' 00:23:24.978 04:20:38 -- nvmf/common.sh@478 -- # killprocess 4065456 00:23:24.978 04:20:38 -- common/autotest_common.sh@926 -- # '[' -z 4065456 ']' 00:23:24.978 04:20:38 -- common/autotest_common.sh@930 -- # kill -0 4065456 00:23:24.978 04:20:38 -- common/autotest_common.sh@931 -- # uname 00:23:24.979 04:20:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:24.979 04:20:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4065456 00:23:24.979 04:20:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:24.979 04:20:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:24.979 04:20:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4065456' 00:23:24.979 killing process with pid 4065456 00:23:24.979 04:20:39 -- common/autotest_common.sh@945 -- # kill 4065456 00:23:24.979 04:20:39 -- common/autotest_common.sh@950 -- # wait 4065456 00:23:24.979 04:20:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:24.979 04:20:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:24.979 04:20:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:24.979 04:20:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.979 04:20:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:24.979 04:20:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.979 04:20:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.979 04:20:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.516 04:20:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:27.516 00:23:27.516 real 0m32.498s 00:23:27.516 user 0m46.713s 00:23:27.516 sys 0m7.908s 00:23:27.516 04:20:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:27.516 04:20:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.516 ************************************ 00:23:27.516 END TEST nvmf_zcopy 00:23:27.516 ************************************ 00:23:27.516 04:20:41 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:27.516 04:20:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:27.516 04:20:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:27.516 04:20:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.516 ************************************ 
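(Side note for anyone rerunning the zcopy portion by hand: the namespace-collision errors condensed above are what spdk_nvmf_subsystem_add_ns_ext reports when nvmf_subsystem_add_ns is asked for an NSID that is already attached, a path zcopy.sh appears to exercise deliberately while the subsystem is paused. A minimal sketch of the same sequence outside the harness, run from the root of an SPDK build tree and assuming a running nvmf target with its TCP listener on 10.0.0.2:4420, the nqn.2016-06.io.spdk:cnode1 subsystem, and a malloc0 bdev already created, using the stock scripts/rpc.py client instead of the rpc_cmd wrapper:

  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # repeating the add while NSID 1 is still attached is what yields
  # "Requested NSID 1 already in use" / "Unable to add namespace"
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

As a sanity check on the latency/IOPS table above, 18034.85 IOPS at an 8192-byte I/O size works out to 18034.85 * 8192 / 2^20, roughly 140.9 MiB/s, matching the MiB/s column.)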
00:23:27.516 START TEST nvmf_nmic 00:23:27.516 ************************************ 00:23:27.516 04:20:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:27.516 * Looking for test storage... 00:23:27.516 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:23:27.516 04:20:41 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.516 04:20:41 -- nvmf/common.sh@7 -- # uname -s 00:23:27.516 04:20:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.516 04:20:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.516 04:20:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.516 04:20:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.516 04:20:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.516 04:20:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.516 04:20:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.516 04:20:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.516 04:20:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.516 04:20:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.516 04:20:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:23:27.516 04:20:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:23:27.516 04:20:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.516 04:20:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.516 04:20:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:27.516 04:20:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:27.516 04:20:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.516 04:20:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.516 04:20:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.516 04:20:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.516 04:20:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.516 04:20:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.516 04:20:41 -- paths/export.sh@5 -- # export PATH 00:23:27.517 04:20:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.517 04:20:41 -- nvmf/common.sh@46 -- # : 0 00:23:27.517 04:20:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:27.517 04:20:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:27.517 04:20:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:27.517 04:20:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.517 04:20:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.517 04:20:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:27.517 04:20:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:27.517 04:20:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:27.517 04:20:41 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:27.517 04:20:41 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:27.517 04:20:41 -- target/nmic.sh@14 -- # nvmftestinit 00:23:27.517 04:20:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:27.517 04:20:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.517 04:20:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:27.517 04:20:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:27.517 04:20:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:27.517 04:20:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.517 04:20:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.517 04:20:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.517 04:20:41 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:23:27.517 04:20:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:27.517 04:20:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:27.517 04:20:41 -- common/autotest_common.sh@10 -- # set +x 00:23:34.083 04:20:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:34.083 04:20:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:34.083 04:20:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:34.083 04:20:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:34.083 04:20:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:34.083 04:20:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:34.083 04:20:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:34.083 04:20:47 -- nvmf/common.sh@294 -- # net_devs=() 00:23:34.083 04:20:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:34.083 04:20:47 -- 
nvmf/common.sh@295 -- # e810=() 00:23:34.083 04:20:47 -- nvmf/common.sh@295 -- # local -ga e810 00:23:34.083 04:20:47 -- nvmf/common.sh@296 -- # x722=() 00:23:34.083 04:20:47 -- nvmf/common.sh@296 -- # local -ga x722 00:23:34.083 04:20:47 -- nvmf/common.sh@297 -- # mlx=() 00:23:34.083 04:20:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:34.083 04:20:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.083 04:20:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:34.083 04:20:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:34.083 04:20:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:34.083 04:20:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:34.083 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:34.083 04:20:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:34.083 04:20:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:34.083 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:34.083 04:20:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:34.083 04:20:47 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:34.083 04:20:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.083 04:20:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:34.083 04:20:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.083 04:20:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:34.083 Found net devices under 0000:27:00.0: cvl_0_0 00:23:34.083 
04:20:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.083 04:20:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:34.083 04:20:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.083 04:20:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:34.083 04:20:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.083 04:20:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:34.083 Found net devices under 0000:27:00.1: cvl_0_1 00:23:34.083 04:20:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.083 04:20:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:34.083 04:20:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:34.083 04:20:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:34.083 04:20:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:34.083 04:20:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.083 04:20:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.083 04:20:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.083 04:20:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:34.083 04:20:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.083 04:20:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.083 04:20:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:34.083 04:20:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.083 04:20:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.083 04:20:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:34.083 04:20:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:34.083 04:20:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.083 04:20:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.084 04:20:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.084 04:20:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.084 04:20:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:34.084 04:20:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.084 04:20:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.084 04:20:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.084 04:20:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:34.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:23:34.084 00:23:34.084 --- 10.0.0.2 ping statistics --- 00:23:34.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.084 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:23:34.084 04:20:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:23:34.084 00:23:34.084 --- 10.0.0.1 ping statistics --- 00:23:34.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.084 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:23:34.084 04:20:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.084 04:20:47 -- nvmf/common.sh@410 -- # return 0 00:23:34.084 04:20:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:34.084 04:20:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.084 04:20:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:34.084 04:20:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:34.084 04:20:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.084 04:20:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:34.084 04:20:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:34.084 04:20:47 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:23:34.084 04:20:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:34.084 04:20:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:34.084 04:20:47 -- common/autotest_common.sh@10 -- # set +x 00:23:34.084 04:20:47 -- nvmf/common.sh@469 -- # nvmfpid=4074217 00:23:34.084 04:20:47 -- nvmf/common.sh@470 -- # waitforlisten 4074217 00:23:34.084 04:20:47 -- common/autotest_common.sh@819 -- # '[' -z 4074217 ']' 00:23:34.084 04:20:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.084 04:20:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:34.084 04:20:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:34.084 04:20:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.084 04:20:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:34.084 04:20:47 -- common/autotest_common.sh@10 -- # set +x 00:23:34.084 [2024-05-14 04:20:47.935855] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:34.084 [2024-05-14 04:20:47.935988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.084 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.084 [2024-05-14 04:20:48.069522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.084 [2024-05-14 04:20:48.164848] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:34.084 [2024-05-14 04:20:48.165051] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.084 [2024-05-14 04:20:48.165067] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.084 [2024-05-14 04:20:48.165078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:34.084 [2024-05-14 04:20:48.165143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.084 [2024-05-14 04:20:48.165171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.084 [2024-05-14 04:20:48.165214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.084 [2024-05-14 04:20:48.165219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.084 04:20:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:34.084 04:20:48 -- common/autotest_common.sh@852 -- # return 0 00:23:34.084 04:20:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:34.084 04:20:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:34.084 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.343 04:20:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.343 04:20:48 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:34.343 04:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.343 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.343 [2024-05-14 04:20:48.684247] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.343 04:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.343 04:20:48 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:34.343 04:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.343 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.343 Malloc0 00:23:34.343 04:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.343 04:20:48 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:34.343 04:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.343 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.343 04:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.343 04:20:48 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:34.343 04:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.343 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.343 04:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.343 04:20:48 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.343 04:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.343 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.343 [2024-05-14 04:20:48.753381] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.343 04:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.343 04:20:48 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:23:34.343 test case1: single bdev can't be used in multiple subsystems 00:23:34.343 04:20:48 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:34.343 04:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.343 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.343 04:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.343 04:20:48 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:34.343 04:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:23:34.343 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.343 04:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.343 04:20:48 -- target/nmic.sh@28 -- # nmic_status=0 00:23:34.343 04:20:48 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:23:34.343 04:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.343 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.343 [2024-05-14 04:20:48.777104] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:23:34.343 [2024-05-14 04:20:48.777135] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:23:34.344 [2024-05-14 04:20:48.777148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:34.344 request: 00:23:34.344 { 00:23:34.344 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:34.344 "namespace": { 00:23:34.344 "bdev_name": "Malloc0" 00:23:34.344 }, 00:23:34.344 "method": "nvmf_subsystem_add_ns", 00:23:34.344 "req_id": 1 00:23:34.344 } 00:23:34.344 Got JSON-RPC error response 00:23:34.344 response: 00:23:34.344 { 00:23:34.344 "code": -32602, 00:23:34.344 "message": "Invalid parameters" 00:23:34.344 } 00:23:34.344 04:20:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:34.344 04:20:48 -- target/nmic.sh@29 -- # nmic_status=1 00:23:34.344 04:20:48 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:23:34.344 04:20:48 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:23:34.344 Adding namespace failed - expected result. 00:23:34.344 04:20:48 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:23:34.344 test case2: host connect to nvmf target in multiple paths 00:23:34.344 04:20:48 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:34.344 04:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.344 04:20:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.344 [2024-05-14 04:20:48.785251] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:34.344 04:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.344 04:20:48 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:35.725 04:20:50 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:23:37.626 04:20:51 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:23:37.626 04:20:51 -- common/autotest_common.sh@1177 -- # local i=0 00:23:37.626 04:20:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:37.626 04:20:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:37.626 04:20:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:39.529 04:20:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:39.529 04:20:53 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:39.529 04:20:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:39.529 04:20:53 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:23:39.529 04:20:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:39.529 04:20:53 -- common/autotest_common.sh@1187 -- # return 0 00:23:39.530 04:20:53 -- target/nmic.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:39.530 [global] 00:23:39.530 thread=1 00:23:39.530 invalidate=1 00:23:39.530 rw=write 00:23:39.530 time_based=1 00:23:39.530 runtime=1 00:23:39.530 ioengine=libaio 00:23:39.530 direct=1 00:23:39.530 bs=4096 00:23:39.530 iodepth=1 00:23:39.530 norandommap=0 00:23:39.530 numjobs=1 00:23:39.530 00:23:39.530 verify_dump=1 00:23:39.530 verify_backlog=512 00:23:39.530 verify_state_save=0 00:23:39.530 do_verify=1 00:23:39.530 verify=crc32c-intel 00:23:39.530 [job0] 00:23:39.530 filename=/dev/nvme0n1 00:23:39.530 Could not set queue depth (nvme0n1) 00:23:39.790 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:39.790 fio-3.35 00:23:39.790 Starting 1 thread 00:23:40.725 00:23:40.725 job0: (groupid=0, jobs=1): err= 0: pid=4075633: Tue May 14 04:20:55 2024 00:23:40.725 read: IOPS=2007, BW=8032KiB/s (8225kB/s)(8040KiB/1001msec) 00:23:40.725 slat (nsec): min=3289, max=52219, avg=8859.82, stdev=9280.97 00:23:40.725 clat (usec): min=181, max=556, avg=259.15, stdev=82.07 00:23:40.725 lat (usec): min=184, max=592, avg=268.01, stdev=89.90 00:23:40.725 clat percentiles (usec): 00:23:40.725 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 202], 00:23:40.725 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 215], 00:23:40.725 | 70.00th=[ 281], 80.00th=[ 310], 90.00th=[ 416], 95.00th=[ 424], 00:23:40.725 | 99.00th=[ 445], 99.50th=[ 453], 99.90th=[ 545], 99.95th=[ 553], 00:23:40.725 | 99.99th=[ 553] 00:23:40.725 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:23:40.725 slat (nsec): min=4654, max=70986, avg=12243.18, stdev=11222.64 00:23:40.725 clat (usec): min=114, max=608, avg=207.86, stdev=61.94 00:23:40.725 lat (usec): min=119, max=679, avg=220.11, stdev=69.76 00:23:40.725 clat percentiles (usec): 00:23:40.725 | 1.00th=[ 117], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 172], 00:23:40.725 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 204], 00:23:40.725 | 70.00th=[ 206], 80.00th=[ 225], 90.00th=[ 318], 95.00th=[ 334], 00:23:40.725 | 99.00th=[ 392], 99.50th=[ 420], 99.90th=[ 465], 99.95th=[ 474], 00:23:40.725 | 99.99th=[ 611] 00:23:40.725 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:23:40.725 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:40.725 lat (usec) : 250=72.28%, 500=27.62%, 750=0.10% 00:23:40.725 cpu : usr=2.10%, sys=4.00%, ctx=4058, majf=0, minf=1 00:23:40.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:40.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.725 issued rwts: total=2010,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:40.725 00:23:40.725 Run status group 0 (all jobs): 00:23:40.725 READ: bw=8032KiB/s (8225kB/s), 8032KiB/s-8032KiB/s (8225kB/s-8225kB/s), io=8040KiB (8233kB), run=1001-1001msec 00:23:40.725 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:23:40.725 00:23:40.725 Disk stats 
(read/write): 00:23:40.725 nvme0n1: ios=1586/2041, merge=0/0, ticks=659/414, in_queue=1073, util=96.09% 00:23:40.725 04:20:55 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:41.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:23:41.292 04:20:55 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:41.293 04:20:55 -- common/autotest_common.sh@1198 -- # local i=0 00:23:41.293 04:20:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:41.293 04:20:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:41.293 04:20:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:41.293 04:20:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:41.293 04:20:55 -- common/autotest_common.sh@1210 -- # return 0 00:23:41.293 04:20:55 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:41.293 04:20:55 -- target/nmic.sh@53 -- # nvmftestfini 00:23:41.293 04:20:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:41.293 04:20:55 -- nvmf/common.sh@116 -- # sync 00:23:41.293 04:20:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:41.293 04:20:55 -- nvmf/common.sh@119 -- # set +e 00:23:41.293 04:20:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:41.293 04:20:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:41.293 rmmod nvme_tcp 00:23:41.293 rmmod nvme_fabrics 00:23:41.293 rmmod nvme_keyring 00:23:41.293 04:20:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:41.293 04:20:55 -- nvmf/common.sh@123 -- # set -e 00:23:41.293 04:20:55 -- nvmf/common.sh@124 -- # return 0 00:23:41.293 04:20:55 -- nvmf/common.sh@477 -- # '[' -n 4074217 ']' 00:23:41.293 04:20:55 -- nvmf/common.sh@478 -- # killprocess 4074217 00:23:41.293 04:20:55 -- common/autotest_common.sh@926 -- # '[' -z 4074217 ']' 00:23:41.293 04:20:55 -- common/autotest_common.sh@930 -- # kill -0 4074217 00:23:41.293 04:20:55 -- common/autotest_common.sh@931 -- # uname 00:23:41.293 04:20:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:41.293 04:20:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4074217 00:23:41.293 04:20:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:41.293 04:20:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:41.293 04:20:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4074217' 00:23:41.293 killing process with pid 4074217 00:23:41.293 04:20:55 -- common/autotest_common.sh@945 -- # kill 4074217 00:23:41.293 04:20:55 -- common/autotest_common.sh@950 -- # wait 4074217 00:23:41.864 04:20:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:41.864 04:20:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:41.864 04:20:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:41.864 04:20:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.864 04:20:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:41.864 04:20:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.864 04:20:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.864 04:20:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.775 04:20:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:43.775 00:23:43.775 real 0m16.718s 00:23:43.775 user 0m45.171s 00:23:43.775 sys 0m5.445s 00:23:43.775 04:20:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.775 04:20:58 -- 
common/autotest_common.sh@10 -- # set +x 00:23:43.775 ************************************ 00:23:43.775 END TEST nvmf_nmic 00:23:43.775 ************************************ 00:23:44.034 04:20:58 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:44.034 04:20:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:44.034 04:20:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:44.034 04:20:58 -- common/autotest_common.sh@10 -- # set +x 00:23:44.034 ************************************ 00:23:44.034 START TEST nvmf_fio_target 00:23:44.034 ************************************ 00:23:44.034 04:20:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:44.034 * Looking for test storage... 00:23:44.034 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:23:44.034 04:20:58 -- target/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.034 04:20:58 -- nvmf/common.sh@7 -- # uname -s 00:23:44.034 04:20:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.034 04:20:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.034 04:20:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.034 04:20:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.034 04:20:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.034 04:20:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.034 04:20:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.034 04:20:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.034 04:20:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.034 04:20:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.034 04:20:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:23:44.034 04:20:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:23:44.034 04:20:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.034 04:20:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.034 04:20:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:44.034 04:20:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:44.034 04:20:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.034 04:20:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.034 04:20:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.034 04:20:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.034 04:20:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.034 04:20:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.034 04:20:58 -- paths/export.sh@5 -- # export PATH 00:23:44.034 04:20:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.034 04:20:58 -- nvmf/common.sh@46 -- # : 0 00:23:44.034 04:20:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:44.034 04:20:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:44.034 04:20:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:44.034 04:20:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.034 04:20:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.034 04:20:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:44.034 04:20:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:44.034 04:20:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:44.034 04:20:58 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:44.034 04:20:58 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:44.034 04:20:58 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:23:44.034 04:20:58 -- target/fio.sh@16 -- # nvmftestinit 00:23:44.034 04:20:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:44.034 04:20:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.034 04:20:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:44.034 04:20:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:44.034 04:20:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:44.034 04:20:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.034 04:20:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.034 04:20:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.034 04:20:58 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:23:44.034 04:20:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:44.034 04:20:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:44.034 04:20:58 -- 
common/autotest_common.sh@10 -- # set +x 00:23:49.381 04:21:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:49.381 04:21:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:49.381 04:21:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:49.381 04:21:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:49.381 04:21:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:49.381 04:21:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:49.381 04:21:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:49.381 04:21:03 -- nvmf/common.sh@294 -- # net_devs=() 00:23:49.381 04:21:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:49.381 04:21:03 -- nvmf/common.sh@295 -- # e810=() 00:23:49.381 04:21:03 -- nvmf/common.sh@295 -- # local -ga e810 00:23:49.381 04:21:03 -- nvmf/common.sh@296 -- # x722=() 00:23:49.381 04:21:03 -- nvmf/common.sh@296 -- # local -ga x722 00:23:49.381 04:21:03 -- nvmf/common.sh@297 -- # mlx=() 00:23:49.381 04:21:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:49.381 04:21:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.381 04:21:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:49.381 04:21:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:49.381 04:21:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:49.381 04:21:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:49.381 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:49.381 04:21:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:49.381 04:21:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:49.381 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:49.381 04:21:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.381 
04:21:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:49.381 04:21:03 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:49.381 04:21:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.381 04:21:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:49.381 04:21:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.381 04:21:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:49.381 Found net devices under 0000:27:00.0: cvl_0_0 00:23:49.381 04:21:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.381 04:21:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:49.381 04:21:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.381 04:21:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:49.381 04:21:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.381 04:21:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:49.381 Found net devices under 0000:27:00.1: cvl_0_1 00:23:49.381 04:21:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.381 04:21:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:49.381 04:21:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:49.381 04:21:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:49.381 04:21:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:49.381 04:21:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.381 04:21:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.381 04:21:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.381 04:21:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:49.381 04:21:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.381 04:21:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.381 04:21:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:49.381 04:21:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.381 04:21:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.381 04:21:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:49.382 04:21:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:49.382 04:21:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.382 04:21:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.382 04:21:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.382 04:21:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.382 04:21:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:49.382 04:21:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.382 04:21:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.382 04:21:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.382 04:21:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:49.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:49.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:23:49.382 00:23:49.382 --- 10.0.0.2 ping statistics --- 00:23:49.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.382 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:23:49.382 04:21:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:49.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.472 ms 00:23:49.382 00:23:49.382 --- 10.0.0.1 ping statistics --- 00:23:49.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.382 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:23:49.382 04:21:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.382 04:21:03 -- nvmf/common.sh@410 -- # return 0 00:23:49.382 04:21:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:49.382 04:21:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.382 04:21:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:49.382 04:21:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:49.382 04:21:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.382 04:21:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:49.382 04:21:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:49.382 04:21:03 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:23:49.382 04:21:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:49.382 04:21:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:49.382 04:21:03 -- common/autotest_common.sh@10 -- # set +x 00:23:49.382 04:21:03 -- nvmf/common.sh@469 -- # nvmfpid=4079959 00:23:49.382 04:21:03 -- nvmf/common.sh@470 -- # waitforlisten 4079959 00:23:49.382 04:21:03 -- common/autotest_common.sh@819 -- # '[' -z 4079959 ']' 00:23:49.382 04:21:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.382 04:21:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:49.382 04:21:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.382 04:21:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:49.382 04:21:03 -- common/autotest_common.sh@10 -- # set +x 00:23:49.382 04:21:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:49.641 [2024-05-14 04:21:04.039182] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:49.641 [2024-05-14 04:21:04.039329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.641 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.641 [2024-05-14 04:21:04.189576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.900 [2024-05-14 04:21:04.298916] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:49.900 [2024-05-14 04:21:04.299113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.900 [2024-05-14 04:21:04.299129] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:49.900 [2024-05-14 04:21:04.299140] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.900 [2024-05-14 04:21:04.299238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.900 [2024-05-14 04:21:04.299270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.900 [2024-05-14 04:21:04.299377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.900 [2024-05-14 04:21:04.299391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.465 04:21:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:50.465 04:21:04 -- common/autotest_common.sh@852 -- # return 0 00:23:50.465 04:21:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:50.465 04:21:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:50.465 04:21:04 -- common/autotest_common.sh@10 -- # set +x 00:23:50.465 04:21:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.465 04:21:04 -- target/fio.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:50.465 [2024-05-14 04:21:04.904798] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.465 04:21:04 -- target/fio.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:50.723 04:21:05 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:23:50.723 04:21:05 -- target/fio.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:50.723 04:21:05 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:23:50.723 04:21:05 -- target/fio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:50.982 04:21:05 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:23:50.982 04:21:05 -- target/fio.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:51.240 04:21:05 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:23:51.240 04:21:05 -- target/fio.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:23:51.240 04:21:05 -- target/fio.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:51.498 04:21:05 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:23:51.498 04:21:05 -- target/fio.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:51.756 04:21:06 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:23:51.756 04:21:06 -- target/fio.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:51.756 04:21:06 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:23:51.756 04:21:06 -- target/fio.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:23:52.015 04:21:06 -- target/fio.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:52.015 04:21:06 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:52.015 04:21:06 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:52.274 04:21:06 -- 
target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:52.274 04:21:06 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:52.532 04:21:06 -- target/fio.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.532 [2024-05-14 04:21:06.988798] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.532 04:21:07 -- target/fio.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:23:52.790 04:21:07 -- target/fio.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:23:52.790 04:21:07 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:54.693 04:21:08 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:23:54.693 04:21:08 -- common/autotest_common.sh@1177 -- # local i=0 00:23:54.693 04:21:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:54.693 04:21:08 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:23:54.693 04:21:08 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:23:54.693 04:21:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:56.592 04:21:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:56.592 04:21:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:56.592 04:21:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:56.592 04:21:10 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:23:56.592 04:21:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:56.593 04:21:10 -- common/autotest_common.sh@1187 -- # return 0 00:23:56.593 04:21:10 -- target/fio.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:56.593 [global] 00:23:56.593 thread=1 00:23:56.593 invalidate=1 00:23:56.593 rw=write 00:23:56.593 time_based=1 00:23:56.593 runtime=1 00:23:56.593 ioengine=libaio 00:23:56.593 direct=1 00:23:56.593 bs=4096 00:23:56.593 iodepth=1 00:23:56.593 norandommap=0 00:23:56.593 numjobs=1 00:23:56.593 00:23:56.593 verify_dump=1 00:23:56.593 verify_backlog=512 00:23:56.593 verify_state_save=0 00:23:56.593 do_verify=1 00:23:56.593 verify=crc32c-intel 00:23:56.593 [job0] 00:23:56.593 filename=/dev/nvme0n1 00:23:56.593 [job1] 00:23:56.593 filename=/dev/nvme0n2 00:23:56.593 [job2] 00:23:56.593 filename=/dev/nvme0n3 00:23:56.593 [job3] 00:23:56.593 filename=/dev/nvme0n4 00:23:56.593 Could not set queue depth (nvme0n1) 00:23:56.593 Could not set queue depth (nvme0n2) 00:23:56.593 Could not set queue depth (nvme0n3) 00:23:56.593 Could not set queue depth (nvme0n4) 00:23:56.851 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:56.851 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:56.851 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:56.851 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:23:56.851 fio-3.35 00:23:56.851 Starting 4 threads 00:23:58.228 00:23:58.228 job0: (groupid=0, jobs=1): err= 0: pid=4082115: Tue May 14 04:21:12 2024 00:23:58.228 read: IOPS=880, BW=3520KiB/s (3605kB/s)(3524KiB/1001msec) 00:23:58.228 slat (nsec): min=3535, max=51100, avg=15377.18, stdev=10384.27 00:23:58.228 clat (usec): min=193, max=41515, avg=861.01, stdev=4537.46 00:23:58.228 lat (usec): min=198, max=41537, avg=876.39, stdev=4537.29 00:23:58.228 clat percentiles (usec): 00:23:58.228 | 1.00th=[ 212], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 253], 00:23:58.228 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 310], 60.00th=[ 338], 00:23:58.228 | 70.00th=[ 367], 80.00th=[ 453], 90.00th=[ 578], 95.00th=[ 652], 00:23:58.228 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:23:58.228 | 99.99th=[41681] 00:23:58.228 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:23:58.228 slat (nsec): min=5040, max=53825, avg=9788.57, stdev=6583.04 00:23:58.228 clat (usec): min=122, max=1021, avg=206.54, stdev=69.51 00:23:58.228 lat (usec): min=129, max=1030, avg=216.33, stdev=72.72 00:23:58.228 clat percentiles (usec): 00:23:58.228 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 149], 00:23:58.228 | 30.00th=[ 172], 40.00th=[ 186], 50.00th=[ 202], 60.00th=[ 221], 00:23:58.228 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 265], 95.00th=[ 293], 00:23:58.228 | 99.00th=[ 392], 99.50th=[ 420], 99.90th=[ 988], 99.95th=[ 1020], 00:23:58.228 | 99.99th=[ 1020] 00:23:58.228 bw ( KiB/s): min= 4096, max= 4096, per=20.26%, avg=4096.00, stdev= 0.00, samples=1 00:23:58.228 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:58.228 lat (usec) : 250=54.44%, 500=37.43%, 750=6.72%, 1000=0.73% 00:23:58.228 lat (msec) : 2=0.10%, 50=0.58% 00:23:58.228 cpu : usr=2.20%, sys=2.60%, ctx=1905, majf=0, minf=1 00:23:58.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.228 issued rwts: total=881,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:58.228 job1: (groupid=0, jobs=1): err= 0: pid=4082118: Tue May 14 04:21:12 2024 00:23:58.228 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:23:58.228 slat (nsec): min=4018, max=7591, avg=5154.27, stdev=894.97 00:23:58.228 clat (usec): min=40968, max=42122, avg=41948.50, stdev=229.03 00:23:58.228 lat (usec): min=40976, max=42128, avg=41953.65, stdev=228.48 00:23:58.228 clat percentiles (usec): 00:23:58.228 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:23:58.228 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:23:58.228 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:58.228 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:58.228 | 99.99th=[42206] 00:23:58.228 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:23:58.228 slat (nsec): min=4470, max=54219, avg=6454.67, stdev=2834.84 00:23:58.228 clat (usec): min=121, max=611, avg=168.03, stdev=34.47 00:23:58.228 lat (usec): min=127, max=665, avg=174.49, stdev=36.35 00:23:58.228 clat percentiles (usec): 00:23:58.228 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 147], 00:23:58.228 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 
60.00th=[ 169], 00:23:58.228 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 210], 00:23:58.228 | 99.00th=[ 269], 99.50th=[ 334], 99.90th=[ 611], 99.95th=[ 611], 00:23:58.228 | 99.99th=[ 611] 00:23:58.228 bw ( KiB/s): min= 4096, max= 4096, per=20.26%, avg=4096.00, stdev= 0.00, samples=1 00:23:58.228 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:58.228 lat (usec) : 250=94.76%, 500=0.94%, 750=0.19% 00:23:58.228 lat (msec) : 50=4.12% 00:23:58.228 cpu : usr=0.10%, sys=0.40%, ctx=534, majf=0, minf=1 00:23:58.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.228 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:58.228 job2: (groupid=0, jobs=1): err= 0: pid=4082119: Tue May 14 04:21:12 2024 00:23:58.228 read: IOPS=1569, BW=6278KiB/s (6428kB/s)(6284KiB/1001msec) 00:23:58.228 slat (nsec): min=3219, max=56916, avg=11876.59, stdev=9633.10 00:23:58.228 clat (usec): min=255, max=583, avg=350.28, stdev=48.99 00:23:58.228 lat (usec): min=260, max=613, avg=362.16, stdev=54.63 00:23:58.228 clat percentiles (usec): 00:23:58.228 | 1.00th=[ 269], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 314], 00:23:58.228 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 347], 00:23:58.229 | 70.00th=[ 371], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 441], 00:23:58.229 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 506], 99.95th=[ 586], 00:23:58.229 | 99.99th=[ 586] 00:23:58.229 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:23:58.229 slat (nsec): min=4954, max=46606, avg=7930.72, stdev=3008.26 00:23:58.229 clat (usec): min=128, max=458, avg=197.99, stdev=35.98 00:23:58.229 lat (usec): min=133, max=505, avg=205.93, stdev=37.11 00:23:58.229 clat percentiles (usec): 00:23:58.229 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 159], 20.00th=[ 174], 00:23:58.229 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:23:58.229 | 70.00th=[ 204], 80.00th=[ 225], 90.00th=[ 249], 95.00th=[ 277], 00:23:58.229 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 347], 99.95th=[ 388], 00:23:58.229 | 99.99th=[ 457] 00:23:58.229 bw ( KiB/s): min= 8192, max= 8192, per=40.52%, avg=8192.00, stdev= 0.00, samples=1 00:23:58.229 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:58.229 lat (usec) : 250=50.98%, 500=48.96%, 750=0.06% 00:23:58.229 cpu : usr=2.10%, sys=5.10%, ctx=3619, majf=0, minf=1 00:23:58.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.229 issued rwts: total=1571,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:58.229 job3: (groupid=0, jobs=1): err= 0: pid=4082120: Tue May 14 04:21:12 2024 00:23:58.229 read: IOPS=1116, BW=4468KiB/s (4575kB/s)(4472KiB/1001msec) 00:23:58.229 slat (nsec): min=3212, max=41466, avg=7833.19, stdev=6223.88 00:23:58.229 clat (usec): min=169, max=41378, avg=640.44, stdev=3836.77 00:23:58.229 lat (usec): min=174, max=41419, avg=648.27, stdev=3837.33 00:23:58.229 clat percentiles (usec): 00:23:58.229 | 1.00th=[ 196], 5.00th=[ 217], 10.00th=[ 225], 
20.00th=[ 231], 00:23:58.229 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 269], 00:23:58.229 | 70.00th=[ 281], 80.00th=[ 318], 90.00th=[ 383], 95.00th=[ 416], 00:23:58.229 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:23:58.229 | 99.99th=[41157] 00:23:58.229 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:23:58.229 slat (nsec): min=4614, max=79660, avg=7345.42, stdev=2682.32 00:23:58.229 clat (usec): min=106, max=657, avg=168.13, stdev=45.55 00:23:58.229 lat (usec): min=113, max=737, avg=175.47, stdev=46.83 00:23:58.229 clat percentiles (usec): 00:23:58.229 | 1.00th=[ 122], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 00:23:58.229 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 151], 60.00th=[ 161], 00:23:58.229 | 70.00th=[ 176], 80.00th=[ 196], 90.00th=[ 239], 95.00th=[ 258], 00:23:58.229 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[ 469], 99.95th=[ 660], 00:23:58.229 | 99.99th=[ 660] 00:23:58.229 bw ( KiB/s): min= 8192, max= 8192, per=40.52%, avg=8192.00, stdev= 0.00, samples=1 00:23:58.229 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:58.229 lat (usec) : 250=74.11%, 500=25.17%, 750=0.34% 00:23:58.229 lat (msec) : 50=0.38% 00:23:58.229 cpu : usr=1.30%, sys=1.60%, ctx=2658, majf=0, minf=1 00:23:58.229 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.229 issued rwts: total=1118,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.229 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:58.229 00:23:58.229 Run status group 0 (all jobs): 00:23:58.229 READ: bw=13.9MiB/s (14.5MB/s), 86.9KiB/s-6278KiB/s (89.0kB/s-6428kB/s), io=14.0MiB (14.7MB), run=1001-1013msec 00:23:58.229 WRITE: bw=19.7MiB/s (20.7MB/s), 2022KiB/s-8184KiB/s (2070kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1013msec 00:23:58.229 00:23:58.229 Disk stats (read/write): 00:23:58.229 nvme0n1: ios=788/1024, merge=0/0, ticks=554/203, in_queue=757, util=86.07% 00:23:58.229 nvme0n2: ios=67/512, merge=0/0, ticks=789/85, in_queue=874, util=89.98% 00:23:58.229 nvme0n3: ios=1516/1536, merge=0/0, ticks=530/277, in_queue=807, util=93.54% 00:23:58.229 nvme0n4: ios=1044/1024, merge=0/0, ticks=1525/161, in_queue=1686, util=94.22% 00:23:58.229 04:21:12 -- target/fio.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:23:58.229 [global] 00:23:58.229 thread=1 00:23:58.229 invalidate=1 00:23:58.229 rw=randwrite 00:23:58.229 time_based=1 00:23:58.229 runtime=1 00:23:58.229 ioengine=libaio 00:23:58.229 direct=1 00:23:58.229 bs=4096 00:23:58.229 iodepth=1 00:23:58.229 norandommap=0 00:23:58.229 numjobs=1 00:23:58.229 00:23:58.229 verify_dump=1 00:23:58.229 verify_backlog=512 00:23:58.229 verify_state_save=0 00:23:58.229 do_verify=1 00:23:58.229 verify=crc32c-intel 00:23:58.229 [job0] 00:23:58.229 filename=/dev/nvme0n1 00:23:58.229 [job1] 00:23:58.229 filename=/dev/nvme0n2 00:23:58.229 [job2] 00:23:58.229 filename=/dev/nvme0n3 00:23:58.229 [job3] 00:23:58.229 filename=/dev/nvme0n4 00:23:58.229 Could not set queue depth (nvme0n1) 00:23:58.229 Could not set queue depth (nvme0n2) 00:23:58.229 Could not set queue depth (nvme0n3) 00:23:58.229 Could not set queue depth (nvme0n4) 00:23:58.488 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:23:58.488 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:58.488 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:58.488 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:58.488 fio-3.35 00:23:58.488 Starting 4 threads 00:23:59.864 00:23:59.864 job0: (groupid=0, jobs=1): err= 0: pid=4082587: Tue May 14 04:21:14 2024 00:23:59.864 read: IOPS=1703, BW=6812KiB/s (6975kB/s)(6812KiB/1000msec) 00:23:59.864 slat (nsec): min=4060, max=65755, avg=7074.52, stdev=4308.34 00:23:59.864 clat (usec): min=214, max=41528, avg=337.18, stdev=1000.58 00:23:59.864 lat (usec): min=221, max=41538, avg=344.25, stdev=1000.72 00:23:59.864 clat percentiles (usec): 00:23:59.864 | 1.00th=[ 233], 5.00th=[ 247], 10.00th=[ 260], 20.00th=[ 273], 00:23:59.864 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 314], 00:23:59.864 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 408], 00:23:59.864 | 99.00th=[ 553], 99.50th=[ 635], 99.90th=[ 938], 99.95th=[41681], 00:23:59.864 | 99.99th=[41681] 00:23:59.864 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:23:59.864 slat (nsec): min=4997, max=49001, avg=7467.79, stdev=1658.66 00:23:59.864 clat (usec): min=125, max=601, avg=190.29, stdev=31.09 00:23:59.864 lat (usec): min=131, max=650, avg=197.76, stdev=31.50 00:23:59.864 clat percentiles (usec): 00:23:59.864 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 153], 20.00th=[ 169], 00:23:59.864 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:23:59.864 | 70.00th=[ 200], 80.00th=[ 212], 90.00th=[ 229], 95.00th=[ 241], 00:23:59.864 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 330], 99.95th=[ 351], 00:23:59.864 | 99.99th=[ 603] 00:23:59.864 bw ( KiB/s): min= 8192, max= 8192, per=33.14%, avg=8192.00, stdev= 0.00, samples=1 00:23:59.864 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:59.864 lat (usec) : 250=55.75%, 500=43.24%, 750=0.91%, 1000=0.08% 00:23:59.864 lat (msec) : 50=0.03% 00:23:59.864 cpu : usr=2.10%, sys=3.40%, ctx=3754, majf=0, minf=1 00:23:59.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.864 issued rwts: total=1703,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:59.864 job1: (groupid=0, jobs=1): err= 0: pid=4082588: Tue May 14 04:21:14 2024 00:23:59.864 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:23:59.864 slat (nsec): min=3955, max=49629, avg=11477.50, stdev=9810.76 00:23:59.864 clat (usec): min=241, max=1039, avg=363.92, stdev=62.04 00:23:59.864 lat (usec): min=248, max=1046, avg=375.40, stdev=65.32 00:23:59.864 clat percentiles (usec): 00:23:59.864 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 314], 00:23:59.864 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 363], 00:23:59.864 | 70.00th=[ 404], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 469], 00:23:59.864 | 99.00th=[ 506], 99.50th=[ 545], 99.90th=[ 627], 99.95th=[ 1037], 00:23:59.864 | 99.99th=[ 1037] 00:23:59.864 write: IOPS=1770, BW=7081KiB/s (7251kB/s)(7088KiB/1001msec); 0 zone resets 00:23:59.864 slat (usec): min=5, max=21933, avg=25.04, stdev=520.84 00:23:59.864 
clat (usec): min=131, max=603, avg=206.88, stdev=48.64 00:23:59.864 lat (usec): min=136, max=22374, avg=231.93, stdev=529.20 00:23:59.864 clat percentiles (usec): 00:23:59.864 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172], 00:23:59.864 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 200], 00:23:59.864 | 70.00th=[ 210], 80.00th=[ 241], 90.00th=[ 285], 95.00th=[ 306], 00:23:59.864 | 99.00th=[ 343], 99.50th=[ 347], 99.90th=[ 519], 99.95th=[ 603], 00:23:59.864 | 99.99th=[ 603] 00:23:59.864 bw ( KiB/s): min= 8192, max= 8192, per=33.14%, avg=8192.00, stdev= 0.00, samples=1 00:23:59.864 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:59.864 lat (usec) : 250=43.38%, 500=56.05%, 750=0.54% 00:23:59.864 lat (msec) : 2=0.03% 00:23:59.864 cpu : usr=2.20%, sys=6.00%, ctx=3310, majf=0, minf=1 00:23:59.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.864 issued rwts: total=1536,1772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:59.864 job2: (groupid=0, jobs=1): err= 0: pid=4082589: Tue May 14 04:21:14 2024 00:23:59.864 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:23:59.864 slat (nsec): min=4011, max=57217, avg=12539.84, stdev=10409.52 00:23:59.864 clat (usec): min=262, max=689, avg=373.06, stdev=52.92 00:23:59.864 lat (usec): min=269, max=720, avg=385.60, stdev=58.33 00:23:59.864 clat percentiles (usec): 00:23:59.864 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 326], 00:23:59.864 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 383], 00:23:59.864 | 70.00th=[ 404], 80.00th=[ 424], 90.00th=[ 441], 95.00th=[ 461], 00:23:59.864 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 611], 99.95th=[ 693], 00:23:59.864 | 99.99th=[ 693] 00:23:59.864 write: IOPS=1938, BW=7752KiB/s (7938kB/s)(7760KiB/1001msec); 0 zone resets 00:23:59.864 slat (nsec): min=4387, max=47980, avg=7829.62, stdev=2658.78 00:23:59.864 clat (usec): min=127, max=381, avg=196.87, stdev=29.14 00:23:59.864 lat (usec): min=134, max=417, avg=204.70, stdev=29.89 00:23:59.864 clat percentiles (usec): 00:23:59.864 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 176], 00:23:59.864 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:23:59.864 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 247], 00:23:59.864 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 363], 99.95th=[ 383], 00:23:59.864 | 99.99th=[ 383] 00:23:59.864 bw ( KiB/s): min= 8192, max= 8192, per=33.14%, avg=8192.00, stdev= 0.00, samples=1 00:23:59.864 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:59.864 lat (usec) : 250=53.22%, 500=46.12%, 750=0.66% 00:23:59.864 cpu : usr=1.70%, sys=5.20%, ctx=3477, majf=0, minf=1 00:23:59.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.864 issued rwts: total=1536,1940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:59.864 job3: (groupid=0, jobs=1): err= 0: pid=4082590: Tue May 14 04:21:14 2024 00:23:59.864 read: IOPS=28, BW=114KiB/s (117kB/s)(116KiB/1015msec) 00:23:59.864 slat 
(nsec): min=7050, max=46189, avg=30625.86, stdev=14969.35 00:23:59.864 clat (usec): min=403, max=42031, avg=30260.37, stdev=17958.69 00:23:59.865 lat (usec): min=412, max=42074, avg=30291.00, stdev=17966.80 00:23:59.865 clat percentiles (usec): 00:23:59.865 | 1.00th=[ 404], 5.00th=[ 506], 10.00th=[ 693], 20.00th=[ 848], 00:23:59.865 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:23:59.865 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:23:59.865 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:59.865 | 99.99th=[42206] 00:23:59.865 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:23:59.865 slat (nsec): min=5044, max=89855, avg=10723.38, stdev=8166.00 00:23:59.865 clat (usec): min=141, max=794, avg=252.84, stdev=61.05 00:23:59.865 lat (usec): min=148, max=839, avg=263.57, stdev=64.86 00:23:59.865 clat percentiles (usec): 00:23:59.865 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 188], 20.00th=[ 215], 00:23:59.865 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:23:59.865 | 70.00th=[ 260], 80.00th=[ 285], 90.00th=[ 318], 95.00th=[ 355], 00:23:59.865 | 99.00th=[ 437], 99.50th=[ 510], 99.90th=[ 791], 99.95th=[ 791], 00:23:59.865 | 99.99th=[ 791] 00:23:59.865 bw ( KiB/s): min= 4096, max= 4096, per=16.57%, avg=4096.00, stdev= 0.00, samples=1 00:23:59.865 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:59.865 lat (usec) : 250=59.33%, 500=34.94%, 750=0.74%, 1000=0.92% 00:23:59.865 lat (msec) : 10=0.18%, 50=3.88% 00:23:59.865 cpu : usr=0.30%, sys=0.79%, ctx=541, majf=0, minf=1 00:23:59.865 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.865 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.865 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:59.865 00:23:59.865 Run status group 0 (all jobs): 00:23:59.865 READ: bw=18.5MiB/s (19.4MB/s), 114KiB/s-6812KiB/s (117kB/s-6975kB/s), io=18.8MiB (19.7MB), run=1000-1015msec 00:23:59.865 WRITE: bw=24.1MiB/s (25.3MB/s), 2018KiB/s-8192KiB/s (2066kB/s-8389kB/s), io=24.5MiB (25.7MB), run=1000-1015msec 00:23:59.865 00:23:59.865 Disk stats (read/write): 00:23:59.865 nvme0n1: ios=1585/1565, merge=0/0, ticks=737/279, in_queue=1016, util=84.27% 00:23:59.865 nvme0n2: ios=1272/1536, merge=0/0, ticks=631/288, in_queue=919, util=89.60% 00:23:59.865 nvme0n3: ios=1388/1536, merge=0/0, ticks=1029/291, in_queue=1320, util=92.32% 00:23:59.865 nvme0n4: ios=81/512, merge=0/0, ticks=745/124, in_queue=869, util=95.63% 00:23:59.865 04:21:14 -- target/fio.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:23:59.865 [global] 00:23:59.865 thread=1 00:23:59.865 invalidate=1 00:23:59.865 rw=write 00:23:59.865 time_based=1 00:23:59.865 runtime=1 00:23:59.865 ioengine=libaio 00:23:59.865 direct=1 00:23:59.865 bs=4096 00:23:59.865 iodepth=128 00:23:59.865 norandommap=0 00:23:59.865 numjobs=1 00:23:59.865 00:23:59.865 verify_dump=1 00:23:59.865 verify_backlog=512 00:23:59.865 verify_state_save=0 00:23:59.865 do_verify=1 00:23:59.865 verify=crc32c-intel 00:23:59.865 [job0] 00:23:59.865 filename=/dev/nvme0n1 00:23:59.865 [job1] 00:23:59.865 filename=/dev/nvme0n2 00:23:59.865 [job2] 00:23:59.865 filename=/dev/nvme0n3 00:23:59.865 [job3] 00:23:59.865 
filename=/dev/nvme0n4 00:23:59.865 Could not set queue depth (nvme0n1) 00:23:59.865 Could not set queue depth (nvme0n2) 00:23:59.865 Could not set queue depth (nvme0n3) 00:23:59.865 Could not set queue depth (nvme0n4) 00:24:00.123 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:00.123 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:00.123 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:00.123 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:00.123 fio-3.35 00:24:00.123 Starting 4 threads 00:24:01.500 00:24:01.500 job0: (groupid=0, jobs=1): err= 0: pid=4083067: Tue May 14 04:21:15 2024 00:24:01.500 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:24:01.500 slat (nsec): min=905, max=61826k, avg=138258.14, stdev=1465996.81 00:24:01.500 clat (usec): min=2976, max=89501, avg=18171.65, stdev=16665.68 00:24:01.500 lat (usec): min=2982, max=89536, avg=18309.91, stdev=16768.67 00:24:01.500 clat percentiles (usec): 00:24:01.500 | 1.00th=[ 3392], 5.00th=[ 4883], 10.00th=[ 5997], 20.00th=[ 8160], 00:24:01.500 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[12125], 60.00th=[13566], 00:24:01.500 | 70.00th=[18220], 80.00th=[23462], 90.00th=[41157], 95.00th=[64226], 00:24:01.500 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:24:01.500 | 99.99th=[89654] 00:24:01.500 write: IOPS=3837, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1009msec); 0 zone resets 00:24:01.500 slat (nsec): min=1990, max=11611k, avg=127900.78, stdev=768928.11 00:24:01.500 clat (usec): min=1131, max=87480, avg=16233.00, stdev=12544.50 00:24:01.500 lat (usec): min=1142, max=87491, avg=16360.90, stdev=12621.62 00:24:01.500 clat percentiles (usec): 00:24:01.500 | 1.00th=[ 3097], 5.00th=[ 5669], 10.00th=[ 6587], 20.00th=[ 8455], 00:24:01.500 | 30.00th=[10159], 40.00th=[10945], 50.00th=[12125], 60.00th=[13566], 00:24:01.500 | 70.00th=[16450], 80.00th=[22414], 90.00th=[31065], 95.00th=[37487], 00:24:01.500 | 99.00th=[72877], 99.50th=[77071], 99.90th=[87557], 99.95th=[87557], 00:24:01.500 | 99.99th=[87557] 00:24:01.500 bw ( KiB/s): min=13568, max=16384, per=21.00%, avg=14976.00, stdev=1991.21, samples=2 00:24:01.500 iops : min= 3392, max= 4096, avg=3744.00, stdev=497.80, samples=2 00:24:01.500 lat (msec) : 2=0.15%, 4=1.49%, 10=31.75%, 20=41.72%, 50=18.95% 00:24:01.500 lat (msec) : 100=5.94% 00:24:01.500 cpu : usr=1.79%, sys=4.27%, ctx=291, majf=0, minf=1 00:24:01.500 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:01.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.500 issued rwts: total=3584,3872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.500 job1: (groupid=0, jobs=1): err= 0: pid=4083068: Tue May 14 04:21:15 2024 00:24:01.500 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:24:01.500 slat (nsec): min=895, max=23202k, avg=88336.29, stdev=678079.05 00:24:01.500 clat (usec): min=1678, max=61862, avg=13189.97, stdev=7496.50 00:24:01.500 lat (usec): min=1681, max=67755, avg=13278.31, stdev=7549.68 00:24:01.500 clat percentiles (usec): 00:24:01.500 | 1.00th=[ 3163], 5.00th=[ 6259], 10.00th=[ 8356], 20.00th=[ 9110], 00:24:01.500 | 30.00th=[ 
9372], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11863], 00:24:01.500 | 70.00th=[13435], 80.00th=[15664], 90.00th=[21890], 95.00th=[28181], 00:24:01.501 | 99.00th=[42730], 99.50th=[49021], 99.90th=[61604], 99.95th=[61604], 00:24:01.501 | 99.99th=[61604] 00:24:01.501 write: IOPS=4339, BW=17.0MiB/s (17.8MB/s)(17.1MiB/1010msec); 0 zone resets 00:24:01.501 slat (usec): min=2, max=15600, avg=126.23, stdev=813.43 00:24:01.501 clat (usec): min=785, max=93319, avg=16892.77, stdev=16481.75 00:24:01.501 lat (usec): min=796, max=93324, avg=17018.99, stdev=16592.77 00:24:01.501 clat percentiles (usec): 00:24:01.501 | 1.00th=[ 4080], 5.00th=[ 5473], 10.00th=[ 6587], 20.00th=[ 8848], 00:24:01.501 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10683], 60.00th=[13566], 00:24:01.501 | 70.00th=[15926], 80.00th=[18220], 90.00th=[33162], 95.00th=[51119], 00:24:01.501 | 99.00th=[88605], 99.50th=[89654], 99.90th=[92799], 99.95th=[92799], 00:24:01.501 | 99.99th=[92799] 00:24:01.501 bw ( KiB/s): min=14088, max=19952, per=23.86%, avg=17020.00, stdev=4146.47, samples=2 00:24:01.501 iops : min= 3522, max= 4988, avg=4255.00, stdev=1036.62, samples=2 00:24:01.501 lat (usec) : 1000=0.02% 00:24:01.501 lat (msec) : 2=0.19%, 4=1.06%, 10=40.74%, 20=42.53%, 50=12.65% 00:24:01.501 lat (msec) : 100=2.81% 00:24:01.501 cpu : usr=2.58%, sys=4.36%, ctx=354, majf=0, minf=1 00:24:01.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:01.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.501 issued rwts: total=4096,4383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.501 job2: (groupid=0, jobs=1): err= 0: pid=4083069: Tue May 14 04:21:15 2024 00:24:01.501 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:24:01.501 slat (nsec): min=844, max=14118k, avg=107724.37, stdev=719223.01 00:24:01.501 clat (usec): min=5982, max=34703, avg=13601.70, stdev=4851.44 00:24:01.501 lat (usec): min=5984, max=34761, avg=13709.42, stdev=4912.93 00:24:01.501 clat percentiles (usec): 00:24:01.501 | 1.00th=[ 7046], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 9503], 00:24:01.501 | 30.00th=[10290], 40.00th=[11469], 50.00th=[12780], 60.00th=[13829], 00:24:01.501 | 70.00th=[14746], 80.00th=[17171], 90.00th=[20579], 95.00th=[24249], 00:24:01.501 | 99.00th=[28443], 99.50th=[28705], 99.90th=[32375], 99.95th=[33817], 00:24:01.501 | 99.99th=[34866] 00:24:01.501 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1003msec); 0 zone resets 00:24:01.501 slat (nsec): min=1523, max=27443k, avg=118746.92, stdev=832640.04 00:24:01.501 clat (usec): min=421, max=47499, avg=15580.52, stdev=7782.66 00:24:01.501 lat (usec): min=1180, max=47504, avg=15699.27, stdev=7837.05 00:24:01.501 clat percentiles (usec): 00:24:01.501 | 1.00th=[ 5866], 5.00th=[ 7308], 10.00th=[ 8356], 20.00th=[ 8717], 00:24:01.501 | 30.00th=[10290], 40.00th=[11731], 50.00th=[13566], 60.00th=[15008], 00:24:01.501 | 70.00th=[17695], 80.00th=[21365], 90.00th=[28181], 95.00th=[31327], 00:24:01.501 | 99.00th=[37487], 99.50th=[41681], 99.90th=[41681], 99.95th=[44827], 00:24:01.501 | 99.99th=[47449] 00:24:01.501 bw ( KiB/s): min=17704, max=17880, per=24.94%, avg=17792.00, stdev=124.45, samples=2 00:24:01.501 iops : min= 4426, max= 4470, avg=4448.00, stdev=31.11, samples=2 00:24:01.501 lat (usec) : 500=0.02% 00:24:01.501 lat (msec) : 4=0.48%, 10=24.94%, 20=56.64%, 50=17.91% 00:24:01.501 cpu 
: usr=2.00%, sys=3.39%, ctx=503, majf=0, minf=1 00:24:01.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:01.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.501 issued rwts: total=4096,4576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.501 job3: (groupid=0, jobs=1): err= 0: pid=4083070: Tue May 14 04:21:15 2024 00:24:01.501 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:24:01.501 slat (nsec): min=830, max=9861.6k, avg=91869.23, stdev=596925.27 00:24:01.501 clat (usec): min=4209, max=42044, avg=12008.60, stdev=3955.10 00:24:01.501 lat (usec): min=4216, max=42117, avg=12100.47, stdev=3991.81 00:24:01.501 clat percentiles (usec): 00:24:01.501 | 1.00th=[ 5735], 5.00th=[ 7308], 10.00th=[ 8586], 20.00th=[10028], 00:24:01.501 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11731], 00:24:01.501 | 70.00th=[12387], 80.00th=[13829], 90.00th=[16188], 95.00th=[17695], 00:24:01.501 | 99.00th=[32375], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:24:01.501 | 99.99th=[42206] 00:24:01.501 write: IOPS=5137, BW=20.1MiB/s (21.0MB/s)(20.2MiB/1008msec); 0 zone resets 00:24:01.501 slat (nsec): min=1760, max=16372k, avg=91581.75, stdev=606131.94 00:24:01.501 clat (usec): min=360, max=78360, avg=12800.96, stdev=6952.09 00:24:01.501 lat (usec): min=421, max=78366, avg=12892.55, stdev=6978.41 00:24:01.501 clat percentiles (usec): 00:24:01.501 | 1.00th=[ 2737], 5.00th=[ 5473], 10.00th=[ 6652], 20.00th=[ 8029], 00:24:01.501 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[11207], 60.00th=[12256], 00:24:01.501 | 70.00th=[16057], 80.00th=[17695], 90.00th=[19792], 95.00th=[22152], 00:24:01.501 | 99.00th=[33817], 99.50th=[63177], 99.90th=[76022], 99.95th=[78119], 00:24:01.501 | 99.99th=[78119] 00:24:01.501 bw ( KiB/s): min=20168, max=20792, per=28.71%, avg=20480.00, stdev=441.23, samples=2 00:24:01.501 iops : min= 5042, max= 5198, avg=5120.00, stdev=110.31, samples=2 00:24:01.501 lat (usec) : 500=0.01%, 750=0.03% 00:24:01.501 lat (msec) : 2=0.13%, 4=1.13%, 10=29.05%, 20=64.02%, 50=5.34% 00:24:01.501 lat (msec) : 100=0.30% 00:24:01.501 cpu : usr=2.38%, sys=4.17%, ctx=489, majf=0, minf=1 00:24:01.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:01.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.501 issued rwts: total=5120,5179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.501 00:24:01.501 Run status group 0 (all jobs): 00:24:01.501 READ: bw=65.3MiB/s (68.5MB/s), 13.9MiB/s-19.8MiB/s (14.5MB/s-20.8MB/s), io=66.0MiB (69.2MB), run=1003-1010msec 00:24:01.501 WRITE: bw=69.7MiB/s (73.0MB/s), 15.0MiB/s-20.1MiB/s (15.7MB/s-21.0MB/s), io=70.4MiB (73.8MB), run=1003-1010msec 00:24:01.501 00:24:01.501 Disk stats (read/write): 00:24:01.501 nvme0n1: ios=3200/3584, merge=0/0, ticks=25992/32336, in_queue=58328, util=93.99% 00:24:01.501 nvme0n2: ios=4088/4096, merge=0/0, ticks=40582/43719, in_queue=84301, util=97.25% 00:24:01.501 nvme0n3: ios=3091/3367, merge=0/0, ticks=23241/29443, in_queue=52684, util=96.94% 00:24:01.501 nvme0n4: ios=3987/4096, merge=0/0, ticks=25314/31476, in_queue=56790, util=96.27% 00:24:01.501 04:21:15 -- target/fio.sh@53 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:24:01.501 [global] 00:24:01.501 thread=1 00:24:01.501 invalidate=1 00:24:01.501 rw=randwrite 00:24:01.501 time_based=1 00:24:01.501 runtime=1 00:24:01.501 ioengine=libaio 00:24:01.501 direct=1 00:24:01.501 bs=4096 00:24:01.501 iodepth=128 00:24:01.501 norandommap=0 00:24:01.501 numjobs=1 00:24:01.501 00:24:01.501 verify_dump=1 00:24:01.501 verify_backlog=512 00:24:01.501 verify_state_save=0 00:24:01.501 do_verify=1 00:24:01.501 verify=crc32c-intel 00:24:01.501 [job0] 00:24:01.501 filename=/dev/nvme0n1 00:24:01.501 [job1] 00:24:01.501 filename=/dev/nvme0n2 00:24:01.501 [job2] 00:24:01.501 filename=/dev/nvme0n3 00:24:01.501 [job3] 00:24:01.501 filename=/dev/nvme0n4 00:24:01.501 Could not set queue depth (nvme0n1) 00:24:01.501 Could not set queue depth (nvme0n2) 00:24:01.501 Could not set queue depth (nvme0n3) 00:24:01.501 Could not set queue depth (nvme0n4) 00:24:01.759 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:01.759 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:01.759 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:01.759 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:01.759 fio-3.35 00:24:01.759 Starting 4 threads 00:24:03.135 00:24:03.135 job0: (groupid=0, jobs=1): err= 0: pid=4083539: Tue May 14 04:21:17 2024 00:24:03.135 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:24:03.135 slat (nsec): min=863, max=15860k, avg=103567.94, stdev=707830.68 00:24:03.135 clat (usec): min=1777, max=64483, avg=13599.24, stdev=7591.40 00:24:03.135 lat (usec): min=1780, max=64486, avg=13702.81, stdev=7631.81 00:24:03.135 clat percentiles (usec): 00:24:03.135 | 1.00th=[ 3523], 5.00th=[ 6783], 10.00th=[ 7635], 20.00th=[ 8586], 00:24:03.135 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11863], 60.00th=[12780], 00:24:03.135 | 70.00th=[14353], 80.00th=[16319], 90.00th=[22676], 95.00th=[30016], 00:24:03.135 | 99.00th=[37487], 99.50th=[61080], 99.90th=[64226], 99.95th=[64226], 00:24:03.135 | 99.99th=[64226] 00:24:03.135 write: IOPS=4590, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:24:03.135 slat (nsec): min=1476, max=25794k, avg=95329.74, stdev=644579.69 00:24:03.135 clat (usec): min=202, max=42828, avg=14048.31, stdev=8418.61 00:24:03.135 lat (usec): min=214, max=47635, avg=14143.63, stdev=8467.58 00:24:03.135 clat percentiles (usec): 00:24:03.135 | 1.00th=[ 2638], 5.00th=[ 4817], 10.00th=[ 6194], 20.00th=[ 7111], 00:24:03.135 | 30.00th=[ 8029], 40.00th=[10028], 50.00th=[11469], 60.00th=[12911], 00:24:03.135 | 70.00th=[16450], 80.00th=[20579], 90.00th=[27132], 95.00th=[31851], 00:24:03.135 | 99.00th=[40109], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:24:03.135 | 99.99th=[42730] 00:24:03.135 bw ( KiB/s): min=15912, max=20952, per=26.58%, avg=18432.00, stdev=3563.82, samples=2 00:24:03.135 iops : min= 3978, max= 5238, avg=4608.00, stdev=890.95, samples=2 00:24:03.135 lat (usec) : 250=0.01%, 500=0.04%, 1000=0.02% 00:24:03.135 lat (msec) : 2=0.37%, 4=1.38%, 10=34.51%, 20=46.58%, 50=16.71% 00:24:03.135 lat (msec) : 100=0.38% 00:24:03.135 cpu : usr=2.69%, sys=3.29%, ctx=520, majf=0, minf=1 00:24:03.135 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:03.135 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.135 issued rwts: total=4608,4609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.135 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.135 job1: (groupid=0, jobs=1): err= 0: pid=4083540: Tue May 14 04:21:17 2024 00:24:03.135 read: IOPS=3158, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1010msec) 00:24:03.135 slat (nsec): min=876, max=24875k, avg=144326.45, stdev=1024534.98 00:24:03.135 clat (usec): min=6949, max=88586, avg=18740.08, stdev=13856.95 00:24:03.135 lat (usec): min=6953, max=88596, avg=18884.41, stdev=13932.87 00:24:03.135 clat percentiles (usec): 00:24:03.135 | 1.00th=[ 7111], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10028], 00:24:03.135 | 30.00th=[10552], 40.00th=[12649], 50.00th=[15008], 60.00th=[16450], 00:24:03.135 | 70.00th=[17957], 80.00th=[24511], 90.00th=[32113], 95.00th=[41157], 00:24:03.135 | 99.00th=[85459], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:24:03.135 | 99.99th=[88605] 00:24:03.135 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:24:03.135 slat (nsec): min=1451, max=15592k, avg=145621.46, stdev=846913.28 00:24:03.135 clat (usec): min=1190, max=54464, avg=19045.23, stdev=11242.27 00:24:03.135 lat (usec): min=1201, max=54473, avg=19190.85, stdev=11307.64 00:24:03.135 clat percentiles (usec): 00:24:03.135 | 1.00th=[ 3425], 5.00th=[ 6390], 10.00th=[ 7308], 20.00th=[ 9634], 00:24:03.135 | 30.00th=[10814], 40.00th=[12911], 50.00th=[16057], 60.00th=[19268], 00:24:03.135 | 70.00th=[23200], 80.00th=[28705], 90.00th=[36439], 95.00th=[40633], 00:24:03.135 | 99.00th=[50594], 99.50th=[52167], 99.90th=[54264], 99.95th=[54264], 00:24:03.135 | 99.99th=[54264] 00:24:03.135 bw ( KiB/s): min=12288, max=16312, per=20.62%, avg=14300.00, stdev=2845.40, samples=2 00:24:03.135 iops : min= 3072, max= 4078, avg=3575.00, stdev=711.35, samples=2 00:24:03.135 lat (msec) : 2=0.16%, 4=1.17%, 10=21.60%, 20=44.57%, 50=30.06% 00:24:03.135 lat (msec) : 100=2.45% 00:24:03.135 cpu : usr=1.68%, sys=3.37%, ctx=371, majf=0, minf=1 00:24:03.135 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:03.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.135 issued rwts: total=3190,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.135 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.135 job2: (groupid=0, jobs=1): err= 0: pid=4083541: Tue May 14 04:21:17 2024 00:24:03.135 read: IOPS=4639, BW=18.1MiB/s (19.0MB/s)(18.3MiB/1010msec) 00:24:03.135 slat (nsec): min=859, max=14869k, avg=102041.26, stdev=721193.59 00:24:03.135 clat (usec): min=3327, max=32799, avg=12110.93, stdev=4271.06 00:24:03.135 lat (usec): min=3331, max=33738, avg=12212.97, stdev=4328.14 00:24:03.135 clat percentiles (usec): 00:24:03.135 | 1.00th=[ 5604], 5.00th=[ 7373], 10.00th=[ 8160], 20.00th=[ 9110], 00:24:03.135 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10814], 60.00th=[11469], 00:24:03.135 | 70.00th=[13173], 80.00th=[15139], 90.00th=[16909], 95.00th=[20055], 00:24:03.135 | 99.00th=[28443], 99.50th=[30540], 99.90th=[32900], 99.95th=[32900], 00:24:03.135 | 99.99th=[32900] 00:24:03.135 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:24:03.135 slat (nsec): min=1563, max=11158k, avg=98768.66, stdev=574444.02 00:24:03.135 clat (usec): min=1988, 
max=41370, avg=13874.43, stdev=5871.66 00:24:03.135 lat (usec): min=2009, max=41379, avg=13973.20, stdev=5904.99 00:24:03.135 clat percentiles (usec): 00:24:03.135 | 1.00th=[ 3326], 5.00th=[ 6063], 10.00th=[ 8225], 20.00th=[ 9110], 00:24:03.135 | 30.00th=[11207], 40.00th=[12256], 50.00th=[13173], 60.00th=[14222], 00:24:03.135 | 70.00th=[15533], 80.00th=[16909], 90.00th=[20317], 95.00th=[23987], 00:24:03.135 | 99.00th=[36439], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:24:03.135 | 99.99th=[41157] 00:24:03.135 bw ( KiB/s): min=18456, max=22112, per=29.25%, avg=20284.00, stdev=2585.18, samples=2 00:24:03.135 iops : min= 4614, max= 5528, avg=5071.00, stdev=646.30, samples=2 00:24:03.135 lat (msec) : 2=0.01%, 4=0.94%, 10=25.77%, 20=64.91%, 50=8.37% 00:24:03.135 cpu : usr=2.38%, sys=3.37%, ctx=597, majf=0, minf=1 00:24:03.135 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:03.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.135 issued rwts: total=4686,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.135 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.135 job3: (groupid=0, jobs=1): err= 0: pid=4083542: Tue May 14 04:21:17 2024 00:24:03.135 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:24:03.135 slat (nsec): min=869, max=15463k, avg=100612.92, stdev=765241.80 00:24:03.135 clat (usec): min=1065, max=67375, avg=13602.10, stdev=7734.81 00:24:03.135 lat (usec): min=1073, max=67408, avg=13702.71, stdev=7822.75 00:24:03.135 clat percentiles (usec): 00:24:03.135 | 1.00th=[ 2507], 5.00th=[ 5604], 10.00th=[ 7373], 20.00th=[ 8979], 00:24:03.135 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11600], 60.00th=[13173], 00:24:03.135 | 70.00th=[15008], 80.00th=[16057], 90.00th=[20579], 95.00th=[27919], 00:24:03.135 | 99.00th=[51643], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:24:03.135 | 99.99th=[67634] 00:24:03.135 write: IOPS=4169, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1011msec); 0 zone resets 00:24:03.135 slat (nsec): min=1507, max=16885k, avg=118259.40, stdev=767735.42 00:24:03.135 clat (usec): min=3385, max=59423, avg=17231.28, stdev=10157.62 00:24:03.135 lat (usec): min=4214, max=59427, avg=17349.54, stdev=10192.77 00:24:03.135 clat percentiles (usec): 00:24:03.135 | 1.00th=[ 4293], 5.00th=[ 6783], 10.00th=[ 7832], 20.00th=[ 9765], 00:24:03.135 | 30.00th=[11207], 40.00th=[13173], 50.00th=[14484], 60.00th=[16188], 00:24:03.135 | 70.00th=[18220], 80.00th=[21890], 90.00th=[31589], 95.00th=[40633], 00:24:03.135 | 99.00th=[53216], 99.50th=[55837], 99.90th=[59507], 99.95th=[59507], 00:24:03.135 | 99.99th=[59507] 00:24:03.135 bw ( KiB/s): min=13560, max=19208, per=23.63%, avg=16384.00, stdev=3993.74, samples=2 00:24:03.135 iops : min= 3390, max= 4802, avg=4096.00, stdev=998.43, samples=2 00:24:03.135 lat (msec) : 2=0.17%, 4=1.20%, 10=23.64%, 20=57.65%, 50=15.74% 00:24:03.135 lat (msec) : 100=1.60% 00:24:03.135 cpu : usr=1.88%, sys=4.16%, ctx=366, majf=0, minf=1 00:24:03.135 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:03.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.135 issued rwts: total=4096,4215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.135 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.135 00:24:03.135 Run status group 0 (all 
jobs): 00:24:03.135 READ: bw=64.1MiB/s (67.2MB/s), 12.3MiB/s-18.1MiB/s (12.9MB/s-19.0MB/s), io=64.8MiB (67.9MB), run=1004-1011msec 00:24:03.135 WRITE: bw=67.7MiB/s (71.0MB/s), 13.9MiB/s-19.8MiB/s (14.5MB/s-20.8MB/s), io=68.5MiB (71.8MB), run=1004-1011msec 00:24:03.135 00:24:03.135 Disk stats (read/write): 00:24:03.135 nvme0n1: ios=4129/4119, merge=0/0, ticks=29708/28684, in_queue=58392, util=97.80% 00:24:03.135 nvme0n2: ios=3120/3095, merge=0/0, ticks=23665/22276, in_queue=45941, util=91.08% 00:24:03.135 nvme0n3: ios=4083/4096, merge=0/0, ticks=39999/39199, in_queue=79198, util=94.21% 00:24:03.135 nvme0n4: ios=3136/3584, merge=0/0, ticks=31266/40845, in_queue=72111, util=98.13% 00:24:03.135 04:21:17 -- target/fio.sh@55 -- # sync 00:24:03.135 04:21:17 -- target/fio.sh@59 -- # fio_pid=4083848 00:24:03.135 04:21:17 -- target/fio.sh@61 -- # sleep 3 00:24:03.135 04:21:17 -- target/fio.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:24:03.135 [global] 00:24:03.135 thread=1 00:24:03.135 invalidate=1 00:24:03.135 rw=read 00:24:03.135 time_based=1 00:24:03.135 runtime=10 00:24:03.136 ioengine=libaio 00:24:03.136 direct=1 00:24:03.136 bs=4096 00:24:03.136 iodepth=1 00:24:03.136 norandommap=1 00:24:03.136 numjobs=1 00:24:03.136 00:24:03.136 [job0] 00:24:03.136 filename=/dev/nvme0n1 00:24:03.136 [job1] 00:24:03.136 filename=/dev/nvme0n2 00:24:03.136 [job2] 00:24:03.136 filename=/dev/nvme0n3 00:24:03.136 [job3] 00:24:03.136 filename=/dev/nvme0n4 00:24:03.136 Could not set queue depth (nvme0n1) 00:24:03.136 Could not set queue depth (nvme0n2) 00:24:03.136 Could not set queue depth (nvme0n3) 00:24:03.136 Could not set queue depth (nvme0n4) 00:24:03.394 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:03.394 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:03.394 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:03.394 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:03.394 fio-3.35 00:24:03.394 Starting 4 threads 00:24:05.962 04:21:20 -- target/fio.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:24:05.962 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=32825344, buflen=4096 00:24:05.962 fio: pid=4084024, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:05.962 04:21:20 -- target/fio.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:24:06.220 04:21:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:06.220 04:21:20 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:24:06.220 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=22482944, buflen=4096 00:24:06.220 fio: pid=4084023, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:06.220 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=286720, buflen=4096 00:24:06.220 fio: pid=4084021, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:06.220 04:21:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:06.220 04:21:20 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:24:06.479 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=13127680, buflen=4096 00:24:06.479 fio: pid=4084022, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:06.479 04:21:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:06.479 04:21:20 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:24:06.479 00:24:06.479 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4084021: Tue May 14 04:21:20 2024 00:24:06.479 read: IOPS=24, BW=98.1KiB/s (100kB/s)(280KiB/2853msec) 00:24:06.479 slat (usec): min=9, max=14609, avg=325.89, stdev=1891.20 00:24:06.479 clat (usec): min=724, max=41439, avg=40403.66, stdev=4812.21 00:24:06.479 lat (usec): min=767, max=55990, avg=40639.00, stdev=5156.45 00:24:06.479 clat percentiles (usec): 00:24:06.479 | 1.00th=[ 725], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:24:06.479 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:24:06.479 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:24:06.479 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:24:06.479 | 99.99th=[41681] 00:24:06.479 bw ( KiB/s): min= 96, max= 104, per=0.44%, avg=99.20, stdev= 4.38, samples=5 00:24:06.479 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:24:06.479 lat (usec) : 750=1.41% 00:24:06.479 lat (msec) : 50=97.18% 00:24:06.479 cpu : usr=0.11%, sys=0.00%, ctx=73, majf=0, minf=1 00:24:06.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.479 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.479 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:06.479 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4084022: Tue May 14 04:21:20 2024 00:24:06.479 read: IOPS=1069, BW=4275KiB/s (4377kB/s)(12.5MiB/2999msec) 00:24:06.479 slat (usec): min=3, max=14429, avg=20.37, stdev=380.78 00:24:06.479 clat (usec): min=174, max=41853, avg=913.84, stdev=4999.34 00:24:06.479 lat (usec): min=180, max=41885, avg=934.21, stdev=5015.30 00:24:06.479 clat percentiles (usec): 00:24:06.479 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 251], 00:24:06.479 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:24:06.479 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 367], 00:24:06.479 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:24:06.479 | 99.99th=[41681] 00:24:06.479 bw ( KiB/s): min= 96, max=12832, per=14.38%, avg=3219.20, stdev=5515.48, samples=5 00:24:06.479 iops : min= 24, max= 3208, avg=804.80, stdev=1378.87, samples=5 00:24:06.479 lat (usec) : 250=18.65%, 500=79.26%, 750=0.28%, 1000=0.12% 00:24:06.479 lat (msec) : 2=0.03%, 4=0.09%, 50=1.53% 00:24:06.479 cpu : usr=0.30%, sys=1.00%, ctx=3210, majf=0, minf=1 00:24:06.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.479 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.479 issued rwts: 
total=3206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:06.479 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4084023: Tue May 14 04:21:20 2024 00:24:06.479 read: IOPS=2002, BW=8010KiB/s (8202kB/s)(21.4MiB/2741msec) 00:24:06.479 slat (nsec): min=2332, max=40079, avg=6166.17, stdev=1993.87 00:24:06.479 clat (usec): min=191, max=42266, avg=492.10, stdev=3066.86 00:24:06.479 lat (usec): min=198, max=42298, avg=498.27, stdev=3068.03 00:24:06.479 clat percentiles (usec): 00:24:06.479 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 00:24:06.479 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:24:06.479 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 302], 00:24:06.479 | 99.00th=[ 498], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:24:06.479 | 99.99th=[42206] 00:24:06.479 bw ( KiB/s): min= 96, max=15368, per=39.20%, avg=8772.80, stdev=7174.36, samples=5 00:24:06.479 iops : min= 24, max= 3842, avg=2193.20, stdev=1793.59, samples=5 00:24:06.479 lat (usec) : 250=48.52%, 500=50.56%, 750=0.27%, 1000=0.02% 00:24:06.479 lat (msec) : 10=0.04%, 50=0.56% 00:24:06.479 cpu : usr=0.44%, sys=2.34%, ctx=5492, majf=0, minf=1 00:24:06.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.479 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.479 issued rwts: total=5490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:06.479 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4084024: Tue May 14 04:21:20 2024 00:24:06.479 read: IOPS=3102, BW=12.1MiB/s (12.7MB/s)(31.3MiB/2583msec) 00:24:06.479 slat (nsec): min=2199, max=37400, avg=3907.02, stdev=1979.76 00:24:06.479 clat (usec): min=193, max=42442, avg=317.72, stdev=1395.01 00:24:06.479 lat (usec): min=199, max=42450, avg=321.63, stdev=1395.88 00:24:06.479 clat percentiles (usec): 00:24:06.479 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:24:06.479 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:24:06.479 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 334], 00:24:06.479 | 99.00th=[ 396], 99.50th=[ 465], 99.90th=[41157], 99.95th=[42206], 00:24:06.479 | 99.99th=[42206] 00:24:06.479 bw ( KiB/s): min= 3576, max=15528, per=55.40%, avg=12398.40, stdev=5043.71, samples=5 00:24:06.479 iops : min= 894, max= 3882, avg=3099.60, stdev=1260.93, samples=5 00:24:06.479 lat (usec) : 250=32.46%, 500=67.24%, 750=0.11%, 1000=0.01% 00:24:06.479 lat (msec) : 2=0.04%, 4=0.01%, 50=0.11% 00:24:06.479 cpu : usr=0.58%, sys=2.13%, ctx=8015, majf=0, minf=2 00:24:06.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.479 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.479 issued rwts: total=8015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:06.479 00:24:06.479 Run status group 0 (all jobs): 00:24:06.479 READ: bw=21.9MiB/s (22.9MB/s), 98.1KiB/s-12.1MiB/s (100kB/s-12.7MB/s), io=65.5MiB (68.7MB), run=2583-2999msec 00:24:06.479 00:24:06.479 Disk stats (read/write): 00:24:06.479 nvme0n1: 
ios=64/0, merge=0/0, ticks=2584/0, in_queue=2584, util=95.15% 00:24:06.479 nvme0n2: ios=3079/0, merge=0/0, ticks=2791/0, in_queue=2791, util=95.07% 00:24:06.480 nvme0n3: ios=5533/0, merge=0/0, ticks=3501/0, in_queue=3501, util=99.34% 00:24:06.480 nvme0n4: ios=7277/0, merge=0/0, ticks=2289/0, in_queue=2289, util=96.06% 00:24:06.480 04:21:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:06.480 04:21:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:24:06.737 04:21:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:06.737 04:21:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:24:06.995 04:21:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:06.995 04:21:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:24:06.995 04:21:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:06.995 04:21:21 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:24:07.252 04:21:21 -- target/fio.sh@69 -- # fio_status=0 00:24:07.252 04:21:21 -- target/fio.sh@70 -- # wait 4083848 00:24:07.252 04:21:21 -- target/fio.sh@70 -- # fio_status=4 00:24:07.252 04:21:21 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:07.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:07.510 04:21:22 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:07.510 04:21:22 -- common/autotest_common.sh@1198 -- # local i=0 00:24:07.510 04:21:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:07.510 04:21:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:07.510 04:21:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:07.510 04:21:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:07.510 04:21:22 -- common/autotest_common.sh@1210 -- # return 0 00:24:07.510 04:21:22 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:24:07.510 04:21:22 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:24:07.510 nvmf hotplug test: fio failed as expected 00:24:07.510 04:21:22 -- target/fio.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.769 04:21:22 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:24:07.769 04:21:22 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:24:07.769 04:21:22 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:24:07.769 04:21:22 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:24:07.769 04:21:22 -- target/fio.sh@91 -- # nvmftestfini 00:24:07.769 04:21:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:07.769 04:21:22 -- nvmf/common.sh@116 -- # sync 00:24:07.769 04:21:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:07.769 04:21:22 -- nvmf/common.sh@119 -- # set +e 00:24:07.769 04:21:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:07.769 04:21:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:07.769 rmmod nvme_tcp 00:24:07.769 rmmod nvme_fabrics 00:24:07.769 rmmod nvme_keyring 00:24:07.769 04:21:22 -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-fabrics 00:24:07.769 04:21:22 -- nvmf/common.sh@123 -- # set -e 00:24:07.769 04:21:22 -- nvmf/common.sh@124 -- # return 0 00:24:07.769 04:21:22 -- nvmf/common.sh@477 -- # '[' -n 4079959 ']' 00:24:07.769 04:21:22 -- nvmf/common.sh@478 -- # killprocess 4079959 00:24:07.769 04:21:22 -- common/autotest_common.sh@926 -- # '[' -z 4079959 ']' 00:24:07.769 04:21:22 -- common/autotest_common.sh@930 -- # kill -0 4079959 00:24:07.769 04:21:22 -- common/autotest_common.sh@931 -- # uname 00:24:07.769 04:21:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:07.769 04:21:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4079959 00:24:07.769 04:21:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:07.769 04:21:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:07.769 04:21:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4079959' 00:24:07.769 killing process with pid 4079959 00:24:07.769 04:21:22 -- common/autotest_common.sh@945 -- # kill 4079959 00:24:07.769 04:21:22 -- common/autotest_common.sh@950 -- # wait 4079959 00:24:08.336 04:21:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:08.336 04:21:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:08.336 04:21:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:08.336 04:21:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.336 04:21:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:08.336 04:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.336 04:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.336 04:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.870 04:21:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:10.870 00:24:10.870 real 0m26.471s 00:24:10.870 user 2m30.593s 00:24:10.870 sys 0m7.704s 00:24:10.870 04:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.870 04:21:24 -- common/autotest_common.sh@10 -- # set +x 00:24:10.870 ************************************ 00:24:10.870 END TEST nvmf_fio_target 00:24:10.870 ************************************ 00:24:10.870 04:21:24 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:24:10.870 04:21:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:10.870 04:21:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:10.870 04:21:24 -- common/autotest_common.sh@10 -- # set +x 00:24:10.870 ************************************ 00:24:10.870 START TEST nvmf_bdevio 00:24:10.870 ************************************ 00:24:10.870 04:21:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:24:10.870 * Looking for test storage... 
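The nvmftestfini sequence traced above unloads the nvme-tcp/nvme-fabrics initiator modules, stops the nvmf_tgt reactor process, and flushes the test interface before the next test begins. A minimal standalone sketch of that cleanup, assuming $tgt_pid holds the nvmf_tgt PID and the interface/namespace names used in this run (cvl_0_1, cvl_0_0_ns_spdk), could look like:

    #!/usr/bin/env bash
    # Hedged sketch of the cleanup shown in the trace above; not the verbatim
    # SPDK nvmftestfini helper. $tgt_pid is assumed to hold the nvmf_tgt PID.
    modprobe -v -r nvme-tcp || true        # unload initiator transport module
    modprobe -v -r nvme-fabrics || true    # unload fabrics core module
    # Stop the SPDK target only if the PID still names a reactor process.
    if [ "$(ps --no-headers -o comm= "$tgt_pid")" = "reactor_0" ]; then
        kill "$tgt_pid"
        while kill -0 "$tgt_pid" 2>/dev/null; do sleep 0.2; done   # wait for exit
    fi
    ip -4 addr flush cvl_0_1 || true                       # drop initiator-side address
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # remove target namespace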
00:24:10.870 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:10.870 04:21:24 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.870 04:21:24 -- nvmf/common.sh@7 -- # uname -s 00:24:10.870 04:21:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.870 04:21:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.870 04:21:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.870 04:21:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.870 04:21:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.870 04:21:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.870 04:21:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.870 04:21:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.870 04:21:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.870 04:21:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.870 04:21:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:10.870 04:21:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:10.870 04:21:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.870 04:21:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.870 04:21:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:10.870 04:21:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:10.870 04:21:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.870 04:21:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.870 04:21:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.870 04:21:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.870 04:21:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.870 04:21:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.870 04:21:24 -- paths/export.sh@5 -- # export PATH 00:24:10.870 04:21:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.870 04:21:24 -- nvmf/common.sh@46 -- # : 0 00:24:10.870 04:21:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:10.870 04:21:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:10.870 04:21:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:10.870 04:21:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.870 04:21:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.870 04:21:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:10.870 04:21:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:10.870 04:21:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:10.870 04:21:24 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:10.870 04:21:24 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:10.870 04:21:24 -- target/bdevio.sh@14 -- # nvmftestinit 00:24:10.870 04:21:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:10.870 04:21:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.870 04:21:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:10.870 04:21:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:10.870 04:21:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:10.870 04:21:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.870 04:21:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.870 04:21:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.870 04:21:24 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:10.870 04:21:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:10.870 04:21:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:10.870 04:21:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.139 04:21:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:16.139 04:21:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:16.139 04:21:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:16.139 04:21:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:16.139 04:21:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:16.139 04:21:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:16.139 04:21:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:16.139 04:21:29 -- nvmf/common.sh@294 -- # net_devs=() 00:24:16.139 04:21:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:16.139 04:21:29 -- 
nvmf/common.sh@295 -- # e810=() 00:24:16.139 04:21:29 -- nvmf/common.sh@295 -- # local -ga e810 00:24:16.139 04:21:29 -- nvmf/common.sh@296 -- # x722=() 00:24:16.139 04:21:29 -- nvmf/common.sh@296 -- # local -ga x722 00:24:16.139 04:21:30 -- nvmf/common.sh@297 -- # mlx=() 00:24:16.139 04:21:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:16.139 04:21:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.139 04:21:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:16.139 04:21:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:16.139 04:21:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:16.139 04:21:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:16.139 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:16.139 04:21:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:16.139 04:21:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:16.139 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:16.139 04:21:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:16.139 04:21:30 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:16.139 04:21:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.139 04:21:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:16.139 04:21:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.139 04:21:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:16.139 Found net devices under 0000:27:00.0: cvl_0_0 00:24:16.139 
04:21:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.139 04:21:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:16.139 04:21:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.139 04:21:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:16.139 04:21:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.139 04:21:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:16.139 Found net devices under 0000:27:00.1: cvl_0_1 00:24:16.139 04:21:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.139 04:21:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:16.139 04:21:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:16.139 04:21:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:16.139 04:21:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:16.139 04:21:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.139 04:21:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.139 04:21:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.139 04:21:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:16.139 04:21:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.139 04:21:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.139 04:21:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:16.139 04:21:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.139 04:21:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.139 04:21:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:16.139 04:21:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:16.139 04:21:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.139 04:21:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.139 04:21:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.139 04:21:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.139 04:21:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:16.139 04:21:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.139 04:21:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.139 04:21:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.139 04:21:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:16.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:24:16.139 00:24:16.139 --- 10.0.0.2 ping statistics --- 00:24:16.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.139 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:24:16.139 04:21:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:24:16.139 00:24:16.139 --- 10.0.0.1 ping statistics --- 00:24:16.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.139 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:16.139 04:21:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.139 04:21:30 -- nvmf/common.sh@410 -- # return 0 00:24:16.139 04:21:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:16.139 04:21:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.139 04:21:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:16.140 04:21:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:16.140 04:21:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.140 04:21:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:16.140 04:21:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:16.140 04:21:30 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:16.140 04:21:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:16.140 04:21:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:16.140 04:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:16.140 04:21:30 -- nvmf/common.sh@469 -- # nvmfpid=4088950 00:24:16.140 04:21:30 -- nvmf/common.sh@470 -- # waitforlisten 4088950 00:24:16.140 04:21:30 -- common/autotest_common.sh@819 -- # '[' -z 4088950 ']' 00:24:16.140 04:21:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.140 04:21:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:16.140 04:21:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.140 04:21:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:16.140 04:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:16.140 04:21:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:24:16.140 [2024-05-14 04:21:30.301239] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:16.140 [2024-05-14 04:21:30.301340] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.140 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.140 [2024-05-14 04:21:30.420649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.140 [2024-05-14 04:21:30.516824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:16.140 [2024-05-14 04:21:30.516991] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.140 [2024-05-14 04:21:30.517006] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.140 [2024-05-14 04:21:30.517016] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
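For reference, the nvmf_tcp_init sequence traced just above condenses to the following sketch; the interface names (cvl_0_0, cvl_0_1), the cvl_0_0_ns_spdk namespace, and the 10.0.0.x addresses are taken from this particular run and will differ on other hosts:

    # move the target-side port into its own network namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the default NVMe/TCP port, verify reachability, and load the host-side driver
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    modprobe nvme-tcp

The nvmf_tgt application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x78), so the bdevio initiator running in the root namespace reaches it over a real TCP path.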
00:24:16.140 [2024-05-14 04:21:30.517239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:16.140 [2024-05-14 04:21:30.517341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:16.140 [2024-05-14 04:21:30.517466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.140 [2024-05-14 04:21:30.517492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:16.707 04:21:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:16.707 04:21:31 -- common/autotest_common.sh@852 -- # return 0 00:24:16.707 04:21:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:16.707 04:21:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:16.707 04:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.707 04:21:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.707 04:21:31 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:16.708 04:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.708 04:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.708 [2024-05-14 04:21:31.047841] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.708 04:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.708 04:21:31 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:16.708 04:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.708 04:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.708 Malloc0 00:24:16.708 04:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.708 04:21:31 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.708 04:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.708 04:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.708 04:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.708 04:21:31 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.708 04:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.708 04:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.708 04:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.708 04:21:31 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.708 04:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.708 04:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:16.708 [2024-05-14 04:21:31.112210] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.708 04:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.708 04:21:31 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:24:16.708 04:21:31 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:16.708 04:21:31 -- nvmf/common.sh@520 -- # config=() 00:24:16.708 04:21:31 -- nvmf/common.sh@520 -- # local subsystem config 00:24:16.708 04:21:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:16.708 04:21:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:16.708 { 00:24:16.708 "params": { 00:24:16.708 "name": "Nvme$subsystem", 00:24:16.708 "trtype": "$TEST_TRANSPORT", 00:24:16.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:16.708 "adrfam": "ipv4", 00:24:16.708 "trsvcid": "$NVMF_PORT", 
00:24:16.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:16.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:16.708 "hdgst": ${hdgst:-false}, 00:24:16.708 "ddgst": ${ddgst:-false} 00:24:16.708 }, 00:24:16.708 "method": "bdev_nvme_attach_controller" 00:24:16.708 } 00:24:16.708 EOF 00:24:16.708 )") 00:24:16.708 04:21:31 -- nvmf/common.sh@542 -- # cat 00:24:16.708 04:21:31 -- nvmf/common.sh@544 -- # jq . 00:24:16.708 04:21:31 -- nvmf/common.sh@545 -- # IFS=, 00:24:16.708 04:21:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:16.708 "params": { 00:24:16.708 "name": "Nvme1", 00:24:16.708 "trtype": "tcp", 00:24:16.708 "traddr": "10.0.0.2", 00:24:16.708 "adrfam": "ipv4", 00:24:16.708 "trsvcid": "4420", 00:24:16.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:16.708 "hdgst": false, 00:24:16.708 "ddgst": false 00:24:16.708 }, 00:24:16.708 "method": "bdev_nvme_attach_controller" 00:24:16.708 }' 00:24:16.708 [2024-05-14 04:21:31.184159] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:16.708 [2024-05-14 04:21:31.184274] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089183 ] 00:24:16.708 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.708 [2024-05-14 04:21:31.283315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:16.966 [2024-05-14 04:21:31.374876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.966 [2024-05-14 04:21:31.374973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.966 [2024-05-14 04:21:31.374979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.224 [2024-05-14 04:21:31.634284] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:24:17.224 [2024-05-14 04:21:31.634322] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:17.224 I/O targets: 00:24:17.224 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:17.224 00:24:17.224 00:24:17.224 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.224 http://cunit.sourceforge.net/ 00:24:17.224 00:24:17.224 00:24:17.224 Suite: bdevio tests on: Nvme1n1 00:24:17.224 Test: blockdev write read block ...passed 00:24:17.224 Test: blockdev write zeroes read block ...passed 00:24:17.224 Test: blockdev write zeroes read no split ...passed 00:24:17.224 Test: blockdev write zeroes read split ...passed 00:24:17.224 Test: blockdev write zeroes read split partial ...passed 00:24:17.224 Test: blockdev reset ...[2024-05-14 04:21:31.779879] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:17.224 [2024-05-14 04:21:31.779964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:24:17.482 [2024-05-14 04:21:31.923411] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:17.482 passed 00:24:17.482 Test: blockdev write read 8 blocks ...passed 00:24:17.482 Test: blockdev write read size > 128k ...passed 00:24:17.482 Test: blockdev write read invalid size ...passed 00:24:17.482 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:17.482 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:17.482 Test: blockdev write read max offset ...passed 00:24:17.742 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:17.742 Test: blockdev writev readv 8 blocks ...passed 00:24:17.742 Test: blockdev writev readv 30 x 1block ...passed 00:24:17.742 Test: blockdev writev readv block ...passed 00:24:17.742 Test: blockdev writev readv size > 128k ...passed 00:24:17.742 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:17.742 Test: blockdev comparev and writev ...[2024-05-14 04:21:32.182506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:17.742 [2024-05-14 04:21:32.182546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.742 [2024-05-14 04:21:32.182576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:17.742 [2024-05-14 04:21:32.182587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.742 [2024-05-14 04:21:32.182961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:17.742 [2024-05-14 04:21:32.182971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.742 [2024-05-14 04:21:32.182984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:17.742 [2024-05-14 04:21:32.182992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.742 [2024-05-14 04:21:32.183383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:17.743 [2024-05-14 04:21:32.183393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.743 [2024-05-14 04:21:32.183406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:17.743 [2024-05-14 04:21:32.183414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.743 [2024-05-14 04:21:32.183776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:17.743 [2024-05-14 04:21:32.183785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.743 [2024-05-14 04:21:32.183798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:17.743 [2024-05-14 04:21:32.183807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.743 passed 00:24:17.743 Test: blockdev nvme passthru rw ...passed 00:24:17.743 Test: blockdev nvme passthru vendor specific ...[2024-05-14 04:21:32.266607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.743 [2024-05-14 04:21:32.266629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.743 [2024-05-14 04:21:32.266792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.743 [2024-05-14 04:21:32.266802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.743 [2024-05-14 04:21:32.266980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.743 [2024-05-14 04:21:32.266990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.743 [2024-05-14 04:21:32.267149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.743 [2024-05-14 04:21:32.267158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.743 passed 00:24:17.743 Test: blockdev nvme admin passthru ...passed 00:24:17.743 Test: blockdev copy ...passed 00:24:17.743 00:24:17.743 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.743 suites 1 1 n/a 0 0 00:24:17.743 tests 23 23 23 0 0 00:24:17.743 asserts 152 152 152 0 n/a 00:24:17.743 00:24:17.743 Elapsed time = 1.388 seconds 00:24:18.313 04:21:32 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.313 04:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.313 04:21:32 -- common/autotest_common.sh@10 -- # set +x 00:24:18.313 04:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.313 04:21:32 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:18.313 04:21:32 -- target/bdevio.sh@30 -- # nvmftestfini 00:24:18.313 04:21:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:18.313 04:21:32 -- nvmf/common.sh@116 -- # sync 00:24:18.313 04:21:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:18.313 04:21:32 -- nvmf/common.sh@119 -- # set +e 00:24:18.313 04:21:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:18.313 04:21:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:18.313 rmmod nvme_tcp 00:24:18.313 rmmod nvme_fabrics 00:24:18.313 rmmod nvme_keyring 00:24:18.313 04:21:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:18.313 04:21:32 -- nvmf/common.sh@123 -- # set -e 00:24:18.313 04:21:32 -- nvmf/common.sh@124 -- # return 0 00:24:18.313 04:21:32 -- nvmf/common.sh@477 -- # '[' -n 4088950 ']' 00:24:18.313 04:21:32 -- nvmf/common.sh@478 -- # killprocess 4088950 00:24:18.313 04:21:32 -- common/autotest_common.sh@926 -- # '[' -z 4088950 ']' 00:24:18.313 04:21:32 -- common/autotest_common.sh@930 -- # kill -0 4088950 00:24:18.313 04:21:32 -- common/autotest_common.sh@931 -- # uname 00:24:18.313 04:21:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:18.313 04:21:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4088950 00:24:18.313 04:21:32 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:24:18.313 04:21:32 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:24:18.313 04:21:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4088950' 00:24:18.313 killing process with pid 4088950 00:24:18.313 04:21:32 -- common/autotest_common.sh@945 -- # kill 4088950 00:24:18.313 04:21:32 -- common/autotest_common.sh@950 -- # wait 4088950 00:24:18.880 04:21:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:18.880 04:21:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:18.880 04:21:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:18.880 04:21:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.880 04:21:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:18.880 04:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.880 04:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.880 04:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.787 04:21:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:20.787 00:24:20.787 real 0m10.491s 00:24:20.787 user 0m15.033s 00:24:20.787 sys 0m4.587s 00:24:20.787 04:21:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.787 04:21:35 -- common/autotest_common.sh@10 -- # set +x 00:24:20.787 ************************************ 00:24:20.787 END TEST nvmf_bdevio 00:24:20.787 ************************************ 00:24:21.045 04:21:35 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:24:21.045 04:21:35 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:21.045 04:21:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:24:21.045 04:21:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:21.045 04:21:35 -- common/autotest_common.sh@10 -- # set +x 00:24:21.045 ************************************ 00:24:21.045 START TEST nvmf_bdevio_no_huge 00:24:21.045 ************************************ 00:24:21.045 04:21:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:21.045 * Looking for test storage... 
00:24:21.045 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:21.045 04:21:35 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.045 04:21:35 -- nvmf/common.sh@7 -- # uname -s 00:24:21.045 04:21:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.045 04:21:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.045 04:21:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.045 04:21:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.045 04:21:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.045 04:21:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.045 04:21:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.045 04:21:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.045 04:21:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.045 04:21:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.045 04:21:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:21.045 04:21:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:21.045 04:21:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.045 04:21:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.045 04:21:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:21.045 04:21:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:21.045 04:21:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.045 04:21:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.046 04:21:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.046 04:21:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.046 04:21:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.046 04:21:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.046 04:21:35 -- paths/export.sh@5 -- # export PATH 00:24:21.046 04:21:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.046 04:21:35 -- nvmf/common.sh@46 -- # : 0 00:24:21.046 04:21:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:21.046 04:21:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:21.046 04:21:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:21.046 04:21:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.046 04:21:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.046 04:21:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:21.046 04:21:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:21.046 04:21:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:21.046 04:21:35 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:21.046 04:21:35 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:21.046 04:21:35 -- target/bdevio.sh@14 -- # nvmftestinit 00:24:21.046 04:21:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:21.046 04:21:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.046 04:21:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:21.046 04:21:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:21.046 04:21:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:21.046 04:21:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.046 04:21:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.046 04:21:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.046 04:21:35 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:21.046 04:21:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:21.046 04:21:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:21.046 04:21:35 -- common/autotest_common.sh@10 -- # set +x 00:24:27.674 04:21:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:27.674 04:21:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:27.674 04:21:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:27.674 04:21:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:27.674 04:21:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:27.674 04:21:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:27.674 04:21:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:27.674 04:21:41 -- nvmf/common.sh@294 -- # net_devs=() 00:24:27.674 04:21:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:27.674 04:21:41 -- 
nvmf/common.sh@295 -- # e810=() 00:24:27.674 04:21:41 -- nvmf/common.sh@295 -- # local -ga e810 00:24:27.674 04:21:41 -- nvmf/common.sh@296 -- # x722=() 00:24:27.674 04:21:41 -- nvmf/common.sh@296 -- # local -ga x722 00:24:27.674 04:21:41 -- nvmf/common.sh@297 -- # mlx=() 00:24:27.674 04:21:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:27.674 04:21:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.674 04:21:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:27.674 04:21:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:27.674 04:21:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:27.674 04:21:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:27.674 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:27.674 04:21:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:27.674 04:21:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:27.674 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:27.674 04:21:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:27.674 04:21:41 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:27.674 04:21:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.674 04:21:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:27.674 04:21:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.674 04:21:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:27.674 Found net devices under 0000:27:00.0: cvl_0_0 00:24:27.674 
04:21:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.674 04:21:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:27.674 04:21:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.674 04:21:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:27.674 04:21:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.674 04:21:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:27.674 Found net devices under 0000:27:00.1: cvl_0_1 00:24:27.674 04:21:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.674 04:21:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:27.674 04:21:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:27.674 04:21:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:27.674 04:21:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.674 04:21:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.674 04:21:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.674 04:21:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:27.674 04:21:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.674 04:21:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.674 04:21:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:27.674 04:21:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.674 04:21:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.674 04:21:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:27.674 04:21:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:27.674 04:21:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.674 04:21:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.674 04:21:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.674 04:21:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.674 04:21:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:27.674 04:21:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.674 04:21:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.674 04:21:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.674 04:21:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:27.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:24:27.674 00:24:27.674 --- 10.0.0.2 ping statistics --- 00:24:27.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.674 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:24:27.674 04:21:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:24:27.674 00:24:27.674 --- 10.0.0.1 ping statistics --- 00:24:27.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.674 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:24:27.674 04:21:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.674 04:21:41 -- nvmf/common.sh@410 -- # return 0 00:24:27.674 04:21:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:27.674 04:21:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.674 04:21:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:27.674 04:21:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:27.675 04:21:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.675 04:21:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:27.675 04:21:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:27.675 04:21:41 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:27.675 04:21:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:27.675 04:21:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:27.675 04:21:41 -- common/autotest_common.sh@10 -- # set +x 00:24:27.675 04:21:41 -- nvmf/common.sh@469 -- # nvmfpid=4093679 00:24:27.675 04:21:41 -- nvmf/common.sh@470 -- # waitforlisten 4093679 00:24:27.675 04:21:41 -- common/autotest_common.sh@819 -- # '[' -z 4093679 ']' 00:24:27.675 04:21:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.675 04:21:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:27.675 04:21:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.675 04:21:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:27.675 04:21:41 -- common/autotest_common.sh@10 -- # set +x 00:24:27.675 04:21:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:27.675 [2024-05-14 04:21:41.716997] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:27.675 [2024-05-14 04:21:41.717112] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:27.675 [2024-05-14 04:21:41.861154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.675 [2024-05-14 04:21:41.978317] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:27.675 [2024-05-14 04:21:41.978520] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.675 [2024-05-14 04:21:41.978535] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.675 [2024-05-14 04:21:41.978547] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
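This no-huge variant exercises the same bdevio flow; the material difference is how the target (and, further below, the bdevio app) are launched. Both add --no-huge together with an explicit -s size, so DPDK backs the processes with regular anonymous memory capped at 1024 MB instead of pre-reserved hugepages (note the -m 1024 --no-huge --iova-mode=va EAL parameters above). The two target invocations from this log, for comparison:

    # hugepage run (nvmf_bdevio)
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
    # no-hugepage run (nvmf_bdevio_no_huge)
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78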
00:24:27.675 [2024-05-14 04:21:41.978765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:27.675 [2024-05-14 04:21:41.978909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.675 [2024-05-14 04:21:41.978897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:27.675 [2024-05-14 04:21:41.978944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:27.934 04:21:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:27.934 04:21:42 -- common/autotest_common.sh@852 -- # return 0 00:24:27.934 04:21:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:27.934 04:21:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:27.934 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:24:27.934 04:21:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.934 04:21:42 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.934 04:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.934 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:24:27.934 [2024-05-14 04:21:42.472279] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.934 04:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.934 04:21:42 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:27.934 04:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.934 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:24:27.934 Malloc0 00:24:27.934 04:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.934 04:21:42 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.934 04:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.934 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:24:27.934 04:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.934 04:21:42 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:27.934 04:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.934 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:24:28.194 04:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.194 04:21:42 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.194 04:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.194 04:21:42 -- common/autotest_common.sh@10 -- # set +x 00:24:28.194 [2024-05-14 04:21:42.531300] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.194 04:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.194 04:21:42 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:28.194 04:21:42 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:28.194 04:21:42 -- nvmf/common.sh@520 -- # config=() 00:24:28.194 04:21:42 -- nvmf/common.sh@520 -- # local subsystem config 00:24:28.194 04:21:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:28.194 04:21:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:28.194 { 00:24:28.194 "params": { 00:24:28.194 "name": "Nvme$subsystem", 00:24:28.194 "trtype": "$TEST_TRANSPORT", 00:24:28.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.194 "adrfam": "ipv4", 00:24:28.194 "trsvcid": 
"$NVMF_PORT", 00:24:28.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.194 "hdgst": ${hdgst:-false}, 00:24:28.194 "ddgst": ${ddgst:-false} 00:24:28.194 }, 00:24:28.194 "method": "bdev_nvme_attach_controller" 00:24:28.194 } 00:24:28.194 EOF 00:24:28.194 )") 00:24:28.194 04:21:42 -- nvmf/common.sh@542 -- # cat 00:24:28.194 04:21:42 -- nvmf/common.sh@544 -- # jq . 00:24:28.194 04:21:42 -- nvmf/common.sh@545 -- # IFS=, 00:24:28.194 04:21:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:28.194 "params": { 00:24:28.194 "name": "Nvme1", 00:24:28.194 "trtype": "tcp", 00:24:28.194 "traddr": "10.0.0.2", 00:24:28.194 "adrfam": "ipv4", 00:24:28.194 "trsvcid": "4420", 00:24:28.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.194 "hdgst": false, 00:24:28.194 "ddgst": false 00:24:28.194 }, 00:24:28.194 "method": "bdev_nvme_attach_controller" 00:24:28.194 }' 00:24:28.194 [2024-05-14 04:21:42.614233] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:28.194 [2024-05-14 04:21:42.614364] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4093731 ] 00:24:28.194 [2024-05-14 04:21:42.764088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:28.454 [2024-05-14 04:21:42.882675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.454 [2024-05-14 04:21:42.882786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.454 [2024-05-14 04:21:42.882789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.715 [2024-05-14 04:21:43.093301] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:24:28.715 [2024-05-14 04:21:43.093341] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:28.715 I/O targets: 00:24:28.715 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:28.715 00:24:28.715 00:24:28.715 CUnit - A unit testing framework for C - Version 2.1-3 00:24:28.715 http://cunit.sourceforge.net/ 00:24:28.715 00:24:28.715 00:24:28.715 Suite: bdevio tests on: Nvme1n1 00:24:28.715 Test: blockdev write read block ...passed 00:24:28.715 Test: blockdev write zeroes read block ...passed 00:24:28.715 Test: blockdev write zeroes read no split ...passed 00:24:28.715 Test: blockdev write zeroes read split ...passed 00:24:28.715 Test: blockdev write zeroes read split partial ...passed 00:24:28.715 Test: blockdev reset ...[2024-05-14 04:21:43.232769] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.715 [2024-05-14 04:21:43.232876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000002f80 (9): Bad file descriptor 00:24:28.715 [2024-05-14 04:21:43.252218] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:28.715 passed 00:24:28.715 Test: blockdev write read 8 blocks ...passed 00:24:28.715 Test: blockdev write read size > 128k ...passed 00:24:28.715 Test: blockdev write read invalid size ...passed 00:24:28.973 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:28.973 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:28.973 Test: blockdev write read max offset ...passed 00:24:28.973 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:28.973 Test: blockdev writev readv 8 blocks ...passed 00:24:28.973 Test: blockdev writev readv 30 x 1block ...passed 00:24:28.973 Test: blockdev writev readv block ...passed 00:24:28.974 Test: blockdev writev readv size > 128k ...passed 00:24:28.974 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:28.974 Test: blockdev comparev and writev ...[2024-05-14 04:21:43.474888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:28.974 [2024-05-14 04:21:43.474925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.974 [2024-05-14 04:21:43.474942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:28.974 [2024-05-14 04:21:43.474952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:28.974 [2024-05-14 04:21:43.475259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:28.974 [2024-05-14 04:21:43.475269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:28.974 [2024-05-14 04:21:43.475286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:28.974 [2024-05-14 04:21:43.475294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:28.974 [2024-05-14 04:21:43.475613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:28.974 [2024-05-14 04:21:43.475623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:28.974 [2024-05-14 04:21:43.475636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:28.974 [2024-05-14 04:21:43.475644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:28.974 [2024-05-14 04:21:43.475928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:28.974 [2024-05-14 04:21:43.475938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:28.974 [2024-05-14 04:21:43.475952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:28.974 [2024-05-14 04:21:43.475960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:28.974 passed 00:24:28.974 Test: blockdev nvme passthru rw ...passed 00:24:28.974 Test: blockdev nvme passthru vendor specific ...[2024-05-14 04:21:43.559696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:28.974 [2024-05-14 04:21:43.559719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.974 [2024-05-14 04:21:43.559898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:28.974 [2024-05-14 04:21:43.559908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.974 [2024-05-14 04:21:43.560069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:28.974 [2024-05-14 04:21:43.560078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.974 [2024-05-14 04:21:43.560251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:28.974 [2024-05-14 04:21:43.560260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.974 passed 00:24:29.232 Test: blockdev nvme admin passthru ...passed 00:24:29.232 Test: blockdev copy ...passed 00:24:29.232 00:24:29.232 Run Summary: Type Total Ran Passed Failed Inactive 00:24:29.232 suites 1 1 n/a 0 0 00:24:29.232 tests 23 23 23 0 0 00:24:29.232 asserts 152 152 152 0 n/a 00:24:29.232 00:24:29.232 Elapsed time = 1.048 seconds 00:24:29.490 04:21:43 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.490 04:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.490 04:21:43 -- common/autotest_common.sh@10 -- # set +x 00:24:29.490 04:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.490 04:21:43 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:29.490 04:21:43 -- target/bdevio.sh@30 -- # nvmftestfini 00:24:29.490 04:21:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:29.490 04:21:43 -- nvmf/common.sh@116 -- # sync 00:24:29.490 04:21:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:29.490 04:21:43 -- nvmf/common.sh@119 -- # set +e 00:24:29.490 04:21:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:29.490 04:21:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:29.490 rmmod nvme_tcp 00:24:29.490 rmmod nvme_fabrics 00:24:29.490 rmmod nvme_keyring 00:24:29.490 04:21:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:29.490 04:21:44 -- nvmf/common.sh@123 -- # set -e 00:24:29.490 04:21:44 -- nvmf/common.sh@124 -- # return 0 00:24:29.490 04:21:44 -- nvmf/common.sh@477 -- # '[' -n 4093679 ']' 00:24:29.490 04:21:44 -- nvmf/common.sh@478 -- # killprocess 4093679 00:24:29.490 04:21:44 -- common/autotest_common.sh@926 -- # '[' -z 4093679 ']' 00:24:29.490 04:21:44 -- common/autotest_common.sh@930 -- # kill -0 4093679 00:24:29.490 04:21:44 -- common/autotest_common.sh@931 -- # uname 00:24:29.490 04:21:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:29.490 04:21:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4093679 00:24:29.490 04:21:44 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:24:29.490 04:21:44 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:24:29.490 04:21:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4093679' 00:24:29.490 killing process with pid 4093679 00:24:29.490 04:21:44 -- common/autotest_common.sh@945 -- # kill 4093679 00:24:29.490 04:21:44 -- common/autotest_common.sh@950 -- # wait 4093679 00:24:30.059 04:21:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:30.060 04:21:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:30.060 04:21:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:30.060 04:21:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.060 04:21:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:30.060 04:21:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.060 04:21:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.060 04:21:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.965 04:21:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:31.965 00:24:31.965 real 0m11.085s 00:24:31.965 user 0m13.467s 00:24:31.965 sys 0m5.595s 00:24:31.965 04:21:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:31.965 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:24:31.965 ************************************ 00:24:31.965 END TEST nvmf_bdevio_no_huge 00:24:31.965 ************************************ 00:24:31.965 04:21:46 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:31.965 04:21:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:31.965 04:21:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:31.965 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:24:31.965 ************************************ 00:24:31.965 START TEST nvmf_tls 00:24:31.965 ************************************ 00:24:31.965 04:21:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:32.222 * Looking for test storage... 
00:24:32.222 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:32.222 04:21:46 -- target/tls.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.222 04:21:46 -- nvmf/common.sh@7 -- # uname -s 00:24:32.222 04:21:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.222 04:21:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.222 04:21:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.222 04:21:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.222 04:21:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.222 04:21:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.222 04:21:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.222 04:21:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.222 04:21:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.222 04:21:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.222 04:21:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:32.222 04:21:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:32.222 04:21:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.222 04:21:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.222 04:21:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:32.222 04:21:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:32.222 04:21:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.222 04:21:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.222 04:21:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.222 04:21:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.222 04:21:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.222 04:21:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.222 04:21:46 -- paths/export.sh@5 -- # export PATH 00:24:32.222 04:21:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.222 04:21:46 -- nvmf/common.sh@46 -- # : 0 00:24:32.222 04:21:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:32.222 04:21:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:32.222 04:21:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:32.222 04:21:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.222 04:21:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.222 04:21:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:32.222 04:21:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:32.222 04:21:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:32.222 04:21:46 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:24:32.222 04:21:46 -- target/tls.sh@71 -- # nvmftestinit 00:24:32.222 04:21:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:32.222 04:21:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.222 04:21:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:32.222 04:21:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:32.222 04:21:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:32.222 04:21:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.222 04:21:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.222 04:21:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.222 04:21:46 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:32.222 04:21:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:32.222 04:21:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:32.222 04:21:46 -- common/autotest_common.sh@10 -- # set +x 00:24:37.491 04:21:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:37.491 04:21:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:37.491 04:21:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:37.491 04:21:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:37.491 04:21:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:37.491 04:21:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:37.491 04:21:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:37.491 04:21:51 -- nvmf/common.sh@294 -- # net_devs=() 00:24:37.491 04:21:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:37.491 04:21:51 -- nvmf/common.sh@295 -- # e810=() 
00:24:37.491 04:21:51 -- nvmf/common.sh@295 -- # local -ga e810 00:24:37.491 04:21:51 -- nvmf/common.sh@296 -- # x722=() 00:24:37.491 04:21:51 -- nvmf/common.sh@296 -- # local -ga x722 00:24:37.491 04:21:51 -- nvmf/common.sh@297 -- # mlx=() 00:24:37.491 04:21:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:37.491 04:21:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.491 04:21:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:37.491 04:21:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:37.491 04:21:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:37.491 04:21:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:37.491 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:37.491 04:21:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:37.491 04:21:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:37.491 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:37.491 04:21:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:37.491 04:21:51 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:37.491 04:21:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.491 04:21:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:37.491 04:21:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.491 04:21:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:37.491 Found net devices under 0000:27:00.0: cvl_0_0 00:24:37.491 04:21:51 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:37.491 04:21:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:37.491 04:21:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.491 04:21:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:37.491 04:21:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.491 04:21:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:37.491 Found net devices under 0000:27:00.1: cvl_0_1 00:24:37.491 04:21:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.491 04:21:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:37.491 04:21:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:37.491 04:21:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:37.491 04:21:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:37.491 04:21:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.491 04:21:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.491 04:21:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.491 04:21:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:37.491 04:21:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.491 04:21:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.491 04:21:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:37.491 04:21:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.491 04:21:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.491 04:21:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:37.491 04:21:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:37.491 04:21:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.491 04:21:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.491 04:21:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.491 04:21:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.491 04:21:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:37.491 04:21:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.491 04:21:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.491 04:21:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.491 04:21:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:37.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:24:37.491 00:24:37.491 --- 10.0.0.2 ping statistics --- 00:24:37.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.492 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:24:37.492 04:21:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:24:37.492 00:24:37.492 --- 10.0.0.1 ping statistics --- 00:24:37.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.492 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:24:37.492 04:21:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.492 04:21:52 -- nvmf/common.sh@410 -- # return 0 00:24:37.492 04:21:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:37.492 04:21:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.492 04:21:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:37.492 04:21:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:37.492 04:21:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.492 04:21:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:37.492 04:21:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:37.492 04:21:52 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:37.492 04:21:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:37.492 04:21:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:37.492 04:21:52 -- common/autotest_common.sh@10 -- # set +x 00:24:37.752 04:21:52 -- nvmf/common.sh@469 -- # nvmfpid=4098179 00:24:37.752 04:21:52 -- nvmf/common.sh@470 -- # waitforlisten 4098179 00:24:37.752 04:21:52 -- common/autotest_common.sh@819 -- # '[' -z 4098179 ']' 00:24:37.752 04:21:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.752 04:21:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:37.752 04:21:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.752 04:21:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:37.752 04:21:52 -- common/autotest_common.sh@10 -- # set +x 00:24:37.752 04:21:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:37.752 [2024-05-14 04:21:52.156463] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:37.752 [2024-05-14 04:21:52.156569] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.752 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.752 [2024-05-14 04:21:52.287153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.012 [2024-05-14 04:21:52.383969] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:38.012 [2024-05-14 04:21:52.384163] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.012 [2024-05-14 04:21:52.384177] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.012 [2024-05-14 04:21:52.384193] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
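nvmf_tcp_init above splits the two cvl_0_* ports between the host and a fresh network namespace so target and initiator talk over real hardware on the same box. Condensed from the xtrace, the setup is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # target reachable from the host
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back

The nvmf target itself is then launched inside that namespace, exactly as traced: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc.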
00:24:38.012 [2024-05-14 04:21:52.384222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.583 04:21:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:38.583 04:21:52 -- common/autotest_common.sh@852 -- # return 0 00:24:38.583 04:21:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:38.583 04:21:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:38.583 04:21:52 -- common/autotest_common.sh@10 -- # set +x 00:24:38.583 04:21:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.583 04:21:52 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:24:38.583 04:21:52 -- target/tls.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:38.583 true 00:24:38.583 04:21:53 -- target/tls.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:38.583 04:21:53 -- target/tls.sh@82 -- # jq -r .tls_version 00:24:38.840 04:21:53 -- target/tls.sh@82 -- # version=0 00:24:38.840 04:21:53 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:24:38.840 04:21:53 -- target/tls.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:38.840 04:21:53 -- target/tls.sh@90 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:38.840 04:21:53 -- target/tls.sh@90 -- # jq -r .tls_version 00:24:39.097 04:21:53 -- target/tls.sh@90 -- # version=13 00:24:39.097 04:21:53 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:24:39.097 04:21:53 -- target/tls.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:39.097 04:21:53 -- target/tls.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:39.097 04:21:53 -- target/tls.sh@98 -- # jq -r .tls_version 00:24:39.354 04:21:53 -- target/tls.sh@98 -- # version=7 00:24:39.354 04:21:53 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:24:39.354 04:21:53 -- target/tls.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:39.354 04:21:53 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:39.354 04:21:53 -- target/tls.sh@105 -- # ktls=false 00:24:39.354 04:21:53 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:24:39.354 04:21:53 -- target/tls.sh@112 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:39.613 04:21:53 -- target/tls.sh@113 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:39.613 04:21:53 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:39.613 04:21:54 -- target/tls.sh@113 -- # ktls=true 00:24:39.613 04:21:54 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:24:39.613 04:21:54 -- target/tls.sh@120 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:39.874 04:21:54 -- target/tls.sh@121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:39.874 04:21:54 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:24:39.874 04:21:54 -- target/tls.sh@121 -- # ktls=false 00:24:39.874 04:21:54 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:24:39.874 04:21:54 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:24:39.874 04:21:54 -- target/tls.sh@49 -- # local 
key hash crc 00:24:39.875 04:21:54 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:24:39.875 04:21:54 -- target/tls.sh@51 -- # hash=01 00:24:39.875 04:21:54 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:24:39.875 04:21:54 -- target/tls.sh@52 -- # tail -c8 00:24:39.875 04:21:54 -- target/tls.sh@52 -- # gzip -1 -c 00:24:39.875 04:21:54 -- target/tls.sh@52 -- # head -c 4 00:24:39.875 04:21:54 -- target/tls.sh@52 -- # crc='p$H�' 00:24:39.875 04:21:54 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:24:39.875 04:21:54 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:24:39.875 04:21:54 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:39.875 04:21:54 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:39.875 04:21:54 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:24:39.875 04:21:54 -- target/tls.sh@49 -- # local key hash crc 00:24:39.875 04:21:54 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:24:39.875 04:21:54 -- target/tls.sh@51 -- # hash=01 00:24:39.875 04:21:54 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:24:39.875 04:21:54 -- target/tls.sh@52 -- # gzip -1 -c 00:24:39.875 04:21:54 -- target/tls.sh@52 -- # head -c 4 00:24:39.875 04:21:54 -- target/tls.sh@52 -- # tail -c8 00:24:39.875 04:21:54 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:24:39.875 04:21:54 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:24:39.875 04:21:54 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:24:39.875 04:21:54 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:39.875 04:21:54 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:39.875 04:21:54 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:39.875 04:21:54 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:24:39.875 04:21:54 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:39.875 04:21:54 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:39.875 04:21:54 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:39.875 04:21:54 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:24:39.875 04:21:54 -- target/tls.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:40.135 04:21:54 -- target/tls.sh@140 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:40.397 04:21:54 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:40.397 04:21:54 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:40.397 04:21:54 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:40.656 [2024-05-14 04:21:54.992399] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.656 04:21:55 -- target/tls.sh@61 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:40.656 04:21:55 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:40.914 [2024-05-14 04:21:55.292417] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:40.914 [2024-05-14 04:21:55.292655] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.914 04:21:55 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:40.914 malloc0 00:24:40.914 04:21:55 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:41.172 04:21:55 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:41.172 04:21:55 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:41.431 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.435 Initializing NVMe Controllers 00:24:51.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:51.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:51.435 Initialization complete. Launching workers. 
00:24:51.435 ======================================================== 00:24:51.435 Latency(us) 00:24:51.435 Device Information : IOPS MiB/s Average min max 00:24:51.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17744.34 69.31 3607.06 1019.47 6173.40 00:24:51.435 ======================================================== 00:24:51.435 Total : 17744.34 69.31 3607.06 1019.47 6173.40 00:24:51.435 00:24:51.435 04:22:05 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:51.435 04:22:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:51.435 04:22:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:51.435 04:22:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:51.435 04:22:05 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:24:51.435 04:22:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:51.435 04:22:05 -- target/tls.sh@28 -- # bdevperf_pid=4100837 00:24:51.435 04:22:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:51.435 04:22:05 -- target/tls.sh@31 -- # waitforlisten 4100837 /var/tmp/bdevperf.sock 00:24:51.435 04:22:05 -- common/autotest_common.sh@819 -- # '[' -z 4100837 ']' 00:24:51.435 04:22:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.435 04:22:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:51.435 04:22:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:51.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:51.435 04:22:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:51.435 04:22:05 -- common/autotest_common.sh@10 -- # set +x 00:24:51.435 04:22:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:51.435 [2024-05-14 04:22:05.965702] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:24:51.435 [2024-05-14 04:22:05.965832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100837 ] 00:24:51.734 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.734 [2024-05-14 04:22:06.098065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.734 [2024-05-14 04:22:06.195103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.303 04:22:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:52.303 04:22:06 -- common/autotest_common.sh@852 -- # return 0 00:24:52.303 04:22:06 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:52.303 [2024-05-14 04:22:06.801278] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:52.303 TLSTESTn1 00:24:52.562 04:22:06 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:52.562 Running I/O for 10 seconds... 00:25:02.542 00:25:02.542 Latency(us) 00:25:02.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.542 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:02.542 Verification LBA range: start 0x0 length 0x2000 00:25:02.542 TLSTESTn1 : 10.01 6006.73 23.46 0.00 0.00 21285.17 4759.98 43322.75 00:25:02.542 =================================================================================================================== 00:25:02.542 Total : 6006.73 23.46 0.00 0.00 21285.17 4759.98 43322.75 00:25:02.542 0 00:25:02.542 04:22:16 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:02.542 04:22:16 -- target/tls.sh@45 -- # killprocess 4100837 00:25:02.542 04:22:16 -- common/autotest_common.sh@926 -- # '[' -z 4100837 ']' 00:25:02.542 04:22:16 -- common/autotest_common.sh@930 -- # kill -0 4100837 00:25:02.542 04:22:16 -- common/autotest_common.sh@931 -- # uname 00:25:02.542 04:22:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:02.542 04:22:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4100837 00:25:02.542 04:22:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:02.542 04:22:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:02.542 04:22:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4100837' 00:25:02.542 killing process with pid 4100837 00:25:02.542 04:22:17 -- common/autotest_common.sh@945 -- # kill 4100837 00:25:02.542 Received shutdown signal, test time was about 10.000000 seconds 00:25:02.542 00:25:02.542 Latency(us) 00:25:02.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.542 =================================================================================================================== 00:25:02.542 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.542 04:22:17 -- common/autotest_common.sh@950 -- # wait 4100837 00:25:03.109 04:22:17 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:03.109 04:22:17 -- common/autotest_common.sh@640 -- # local es=0 00:25:03.109 04:22:17 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:03.109 04:22:17 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:03.109 04:22:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:03.109 04:22:17 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:03.109 04:22:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:03.109 04:22:17 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:03.109 04:22:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:03.109 04:22:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:03.109 04:22:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:03.109 04:22:17 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:25:03.109 04:22:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:03.109 04:22:17 -- target/tls.sh@28 -- # bdevperf_pid=4103105 00:25:03.109 04:22:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:03.109 04:22:17 -- target/tls.sh@31 -- # waitforlisten 4103105 /var/tmp/bdevperf.sock 00:25:03.109 04:22:17 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:03.109 04:22:17 -- common/autotest_common.sh@819 -- # '[' -z 4103105 ']' 00:25:03.109 04:22:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.109 04:22:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.109 04:22:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:03.109 04:22:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.109 04:22:17 -- common/autotest_common.sh@10 -- # set +x 00:25:03.109 [2024-05-14 04:22:17.472507] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:03.109 [2024-05-14 04:22:17.472622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103105 ] 00:25:03.109 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.109 [2024-05-14 04:22:17.583770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.109 [2024-05-14 04:22:17.677460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.676 04:22:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:03.676 04:22:18 -- common/autotest_common.sh@852 -- # return 0 00:25:03.676 04:22:18 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:25:03.934 [2024-05-14 04:22:18.294883] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.935 [2024-05-14 04:22:18.304951] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:03.935 [2024-05-14 04:22:18.305393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:25:03.935 [2024-05-14 04:22:18.306369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:25:03.935 [2024-05-14 04:22:18.307362] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.935 [2024-05-14 04:22:18.307382] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:03.935 [2024-05-14 04:22:18.307400] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
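The key1.txt and key2.txt files exercised by these bdevperf cases were produced earlier in the run by format_interchange_psk. Condensed from that trace, the hash-01 variant appends the key's CRC32 (read from the gzip trailer) to the ASCII key and base64-encodes the pair; this is a sketch of the idea, not the exact helper:

    key=00112233445566778899aabbccddeeff
    # gzip -1 trailer = CRC32 (little-endian) + ISIZE; keep the first 4 trailer bytes
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    echo -n "NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):" > key1.txt
    chmod 0600 key1.txt
    # the same file is then handed to both ends, as in the traced RPCs:
    #   rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt
    #   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ... --psk key1.txt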
00:25:03.935 request: 00:25:03.935 { 00:25:03.935 "name": "TLSTEST", 00:25:03.935 "trtype": "tcp", 00:25:03.935 "traddr": "10.0.0.2", 00:25:03.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:03.935 "adrfam": "ipv4", 00:25:03.935 "trsvcid": "4420", 00:25:03.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.935 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:25:03.935 "method": "bdev_nvme_attach_controller", 00:25:03.935 "req_id": 1 00:25:03.935 } 00:25:03.935 Got JSON-RPC error response 00:25:03.935 response: 00:25:03.935 { 00:25:03.935 "code": -32602, 00:25:03.935 "message": "Invalid parameters" 00:25:03.935 } 00:25:03.935 04:22:18 -- target/tls.sh@36 -- # killprocess 4103105 00:25:03.935 04:22:18 -- common/autotest_common.sh@926 -- # '[' -z 4103105 ']' 00:25:03.935 04:22:18 -- common/autotest_common.sh@930 -- # kill -0 4103105 00:25:03.935 04:22:18 -- common/autotest_common.sh@931 -- # uname 00:25:03.935 04:22:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:03.935 04:22:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4103105 00:25:03.935 04:22:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:03.935 04:22:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:03.935 04:22:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4103105' 00:25:03.935 killing process with pid 4103105 00:25:03.935 04:22:18 -- common/autotest_common.sh@945 -- # kill 4103105 00:25:03.935 Received shutdown signal, test time was about 10.000000 seconds 00:25:03.935 00:25:03.935 Latency(us) 00:25:03.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.935 =================================================================================================================== 00:25:03.935 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:03.935 04:22:18 -- common/autotest_common.sh@950 -- # wait 4103105 00:25:04.194 04:22:18 -- target/tls.sh@37 -- # return 1 00:25:04.194 04:22:18 -- common/autotest_common.sh@643 -- # es=1 00:25:04.194 04:22:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:04.194 04:22:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:04.194 04:22:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:04.194 04:22:18 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:04.194 04:22:18 -- common/autotest_common.sh@640 -- # local es=0 00:25:04.194 04:22:18 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:04.194 04:22:18 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:04.194 04:22:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.194 04:22:18 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:04.194 04:22:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.194 04:22:18 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:04.194 04:22:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:04.194 04:22:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:04.194 04:22:18 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:25:04.194 04:22:18 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:25:04.194 04:22:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:04.194 04:22:18 -- target/tls.sh@28 -- # bdevperf_pid=4103409 00:25:04.194 04:22:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:04.194 04:22:18 -- target/tls.sh@31 -- # waitforlisten 4103409 /var/tmp/bdevperf.sock 00:25:04.194 04:22:18 -- common/autotest_common.sh@819 -- # '[' -z 4103409 ']' 00:25:04.194 04:22:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.194 04:22:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:04.194 04:22:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.194 04:22:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:04.194 04:22:18 -- common/autotest_common.sh@10 -- # set +x 00:25:04.194 04:22:18 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:04.453 [2024-05-14 04:22:18.821439] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:04.453 [2024-05-14 04:22:18.821592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103409 ] 00:25:04.453 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.453 [2024-05-14 04:22:18.951983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.711 [2024-05-14 04:22:19.048787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.969 04:22:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:04.969 04:22:19 -- common/autotest_common.sh@852 -- # return 0 00:25:04.969 04:22:19 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:05.228 [2024-05-14 04:22:19.631399] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:05.228 [2024-05-14 04:22:19.644720] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:05.228 [2024-05-14 04:22:19.644749] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:05.228 [2024-05-14 04:22:19.644787] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:05.228 [2024-05-14 04:22:19.645197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:25:05.228 [2024-05-14 04:22:19.646171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x613000003300 (9): Bad file descriptor 00:25:05.228 [2024-05-14 04:22:19.647165] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:05.228 [2024-05-14 04:22:19.647181] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:05.228 [2024-05-14 04:22:19.647210] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:05.228 request: 00:25:05.228 { 00:25:05.228 "name": "TLSTEST", 00:25:05.228 "trtype": "tcp", 00:25:05.228 "traddr": "10.0.0.2", 00:25:05.228 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:05.228 "adrfam": "ipv4", 00:25:05.228 "trsvcid": "4420", 00:25:05.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.228 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:25:05.228 "method": "bdev_nvme_attach_controller", 00:25:05.228 "req_id": 1 00:25:05.228 } 00:25:05.228 Got JSON-RPC error response 00:25:05.228 response: 00:25:05.228 { 00:25:05.228 "code": -32602, 00:25:05.228 "message": "Invalid parameters" 00:25:05.228 } 00:25:05.228 04:22:19 -- target/tls.sh@36 -- # killprocess 4103409 00:25:05.228 04:22:19 -- common/autotest_common.sh@926 -- # '[' -z 4103409 ']' 00:25:05.228 04:22:19 -- common/autotest_common.sh@930 -- # kill -0 4103409 00:25:05.228 04:22:19 -- common/autotest_common.sh@931 -- # uname 00:25:05.228 04:22:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:05.228 04:22:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4103409 00:25:05.228 04:22:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:05.228 04:22:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:05.228 04:22:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4103409' 00:25:05.228 killing process with pid 4103409 00:25:05.228 04:22:19 -- common/autotest_common.sh@945 -- # kill 4103409 00:25:05.228 Received shutdown signal, test time was about 10.000000 seconds 00:25:05.228 00:25:05.228 Latency(us) 00:25:05.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.228 =================================================================================================================== 00:25:05.228 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:05.228 04:22:19 -- common/autotest_common.sh@950 -- # wait 4103409 00:25:05.488 04:22:20 -- target/tls.sh@37 -- # return 1 00:25:05.488 04:22:20 -- common/autotest_common.sh@643 -- # es=1 00:25:05.488 04:22:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:05.488 04:22:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:05.488 04:22:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:05.488 04:22:20 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:05.488 04:22:20 -- common/autotest_common.sh@640 -- # local es=0 00:25:05.488 04:22:20 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:05.488 04:22:20 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:05.488 04:22:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.488 04:22:20 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:05.488 04:22:20 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.488 04:22:20 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:05.488 04:22:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:05.488 04:22:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:05.488 04:22:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:05.488 04:22:20 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:25:05.488 04:22:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:05.488 04:22:20 -- target/tls.sh@28 -- # bdevperf_pid=4103612 00:25:05.488 04:22:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:05.488 04:22:20 -- target/tls.sh@31 -- # waitforlisten 4103612 /var/tmp/bdevperf.sock 00:25:05.488 04:22:20 -- common/autotest_common.sh@819 -- # '[' -z 4103612 ']' 00:25:05.488 04:22:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.488 04:22:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:05.488 04:22:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:05.488 04:22:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:05.488 04:22:20 -- common/autotest_common.sh@10 -- # set +x 00:25:05.488 04:22:20 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:05.746 [2024-05-14 04:22:20.137496] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:05.746 [2024-05-14 04:22:20.137624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103612 ] 00:25:05.746 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.746 [2024-05-14 04:22:20.261741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.004 [2024-05-14 04:22:20.352870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.571 04:22:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:06.571 04:22:20 -- common/autotest_common.sh@852 -- # return 0 00:25:06.571 04:22:20 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:25:06.571 [2024-05-14 04:22:20.998371] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.571 [2024-05-14 04:22:21.005802] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:06.571 [2024-05-14 04:22:21.005838] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:06.571 [2024-05-14 04:22:21.005878] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:06.571 [2024-05-14 04:22:21.006216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:25:06.571 [2024-05-14 04:22:21.007195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:25:06.571 [2024-05-14 04:22:21.008183] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:06.571 [2024-05-14 04:22:21.008205] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:06.571 [2024-05-14 04:22:21.008219] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:06.571 request: 00:25:06.571 { 00:25:06.571 "name": "TLSTEST", 00:25:06.571 "trtype": "tcp", 00:25:06.571 "traddr": "10.0.0.2", 00:25:06.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:06.571 "adrfam": "ipv4", 00:25:06.571 "trsvcid": "4420", 00:25:06.571 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:06.571 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:25:06.571 "method": "bdev_nvme_attach_controller", 00:25:06.571 "req_id": 1 00:25:06.571 } 00:25:06.571 Got JSON-RPC error response 00:25:06.571 response: 00:25:06.571 { 00:25:06.571 "code": -32602, 00:25:06.571 "message": "Invalid parameters" 00:25:06.571 } 00:25:06.571 04:22:21 -- target/tls.sh@36 -- # killprocess 4103612 00:25:06.571 04:22:21 -- common/autotest_common.sh@926 -- # '[' -z 4103612 ']' 00:25:06.571 04:22:21 -- common/autotest_common.sh@930 -- # kill -0 4103612 00:25:06.571 04:22:21 -- common/autotest_common.sh@931 -- # uname 00:25:06.571 04:22:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:06.571 04:22:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4103612 00:25:06.571 04:22:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:06.571 04:22:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:06.571 04:22:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4103612' 00:25:06.571 killing process with pid 4103612 00:25:06.571 04:22:21 -- common/autotest_common.sh@945 -- # kill 4103612 00:25:06.571 Received shutdown signal, test time was about 10.000000 seconds 00:25:06.571 00:25:06.571 Latency(us) 00:25:06.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.571 =================================================================================================================== 00:25:06.571 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:06.571 04:22:21 -- common/autotest_common.sh@950 -- # wait 4103612 00:25:07.138 04:22:21 -- target/tls.sh@37 -- # return 1 00:25:07.138 04:22:21 -- common/autotest_common.sh@643 -- # es=1 00:25:07.138 04:22:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:07.138 04:22:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:07.138 04:22:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:07.138 04:22:21 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:07.138 04:22:21 -- common/autotest_common.sh@640 -- # local es=0 00:25:07.138 04:22:21 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:07.138 04:22:21 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:07.138 04:22:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:07.138 04:22:21 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:07.138 04:22:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:07.138 04:22:21 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:07.138 04:22:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:07.138 04:22:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:07.138 04:22:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:07.138 04:22:21 -- target/tls.sh@23 -- # psk= 00:25:07.138 04:22:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:07.138 04:22:21 -- target/tls.sh@28 -- # 
bdevperf_pid=4103857 00:25:07.138 04:22:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:07.138 04:22:21 -- target/tls.sh@31 -- # waitforlisten 4103857 /var/tmp/bdevperf.sock 00:25:07.138 04:22:21 -- common/autotest_common.sh@819 -- # '[' -z 4103857 ']' 00:25:07.138 04:22:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.138 04:22:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:07.138 04:22:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.138 04:22:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:07.138 04:22:21 -- common/autotest_common.sh@10 -- # set +x 00:25:07.138 04:22:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:07.138 [2024-05-14 04:22:21.492632] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:07.138 [2024-05-14 04:22:21.492746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103857 ] 00:25:07.138 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.138 [2024-05-14 04:22:21.609889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.138 [2024-05-14 04:22:21.700011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.705 04:22:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:07.705 04:22:22 -- common/autotest_common.sh@852 -- # return 0 00:25:07.705 04:22:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:07.964 [2024-05-14 04:22:22.325283] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:07.964 [2024-05-14 04:22:22.326818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:25:07.964 [2024-05-14 04:22:22.327806] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.964 [2024-05-14 04:22:22.327828] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:07.964 [2024-05-14 04:22:22.327841] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
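All of these expected-failure cases (missing PSK, mismatched key, wrong host/subsystem pairing) are driven through the NOT wrapper visible in the xtrace, which inverts the wrapped command's exit status so a refused TLS connection counts as a pass. A minimal sketch of the idea; the real helper in autotest_common.sh additionally validates the argument and special-cases signal exits (es > 128):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))     # succeed only when the wrapped command failed
    }
    # e.g. NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 key1.txt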
00:25:07.964 request: 00:25:07.964 { 00:25:07.964 "name": "TLSTEST", 00:25:07.964 "trtype": "tcp", 00:25:07.964 "traddr": "10.0.0.2", 00:25:07.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.964 "adrfam": "ipv4", 00:25:07.964 "trsvcid": "4420", 00:25:07.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.964 "method": "bdev_nvme_attach_controller", 00:25:07.964 "req_id": 1 00:25:07.964 } 00:25:07.964 Got JSON-RPC error response 00:25:07.964 response: 00:25:07.964 { 00:25:07.964 "code": -32602, 00:25:07.964 "message": "Invalid parameters" 00:25:07.964 } 00:25:07.964 04:22:22 -- target/tls.sh@36 -- # killprocess 4103857 00:25:07.964 04:22:22 -- common/autotest_common.sh@926 -- # '[' -z 4103857 ']' 00:25:07.964 04:22:22 -- common/autotest_common.sh@930 -- # kill -0 4103857 00:25:07.964 04:22:22 -- common/autotest_common.sh@931 -- # uname 00:25:07.964 04:22:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:07.964 04:22:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4103857 00:25:07.964 04:22:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:07.964 04:22:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:07.964 04:22:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4103857' 00:25:07.964 killing process with pid 4103857 00:25:07.964 04:22:22 -- common/autotest_common.sh@945 -- # kill 4103857 00:25:07.964 Received shutdown signal, test time was about 10.000000 seconds 00:25:07.964 00:25:07.964 Latency(us) 00:25:07.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.964 =================================================================================================================== 00:25:07.965 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:07.965 04:22:22 -- common/autotest_common.sh@950 -- # wait 4103857 00:25:08.224 04:22:22 -- target/tls.sh@37 -- # return 1 00:25:08.224 04:22:22 -- common/autotest_common.sh@643 -- # es=1 00:25:08.224 04:22:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:08.224 04:22:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:08.224 04:22:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:08.224 04:22:22 -- target/tls.sh@167 -- # killprocess 4098179 00:25:08.224 04:22:22 -- common/autotest_common.sh@926 -- # '[' -z 4098179 ']' 00:25:08.224 04:22:22 -- common/autotest_common.sh@930 -- # kill -0 4098179 00:25:08.224 04:22:22 -- common/autotest_common.sh@931 -- # uname 00:25:08.224 04:22:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:08.224 04:22:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4098179 00:25:08.224 04:22:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:08.224 04:22:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:08.224 04:22:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4098179' 00:25:08.224 killing process with pid 4098179 00:25:08.224 04:22:22 -- common/autotest_common.sh@945 -- # kill 4098179 00:25:08.224 04:22:22 -- common/autotest_common.sh@950 -- # wait 4098179 00:25:08.791 04:22:23 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:25:08.791 04:22:23 -- target/tls.sh@49 -- # local key hash crc 00:25:08.791 04:22:23 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:08.791 04:22:23 -- target/tls.sh@51 -- # hash=02 00:25:08.791 04:22:23 -- target/tls.sh@52 -- # gzip 
-1 -c 00:25:08.791 04:22:23 -- target/tls.sh@52 -- # head -c 4 00:25:08.791 04:22:23 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:25:08.791 04:22:23 -- target/tls.sh@52 -- # tail -c8 00:25:08.791 04:22:23 -- target/tls.sh@52 -- # crc='�e�'\''' 00:25:08.791 04:22:23 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:25:08.791 04:22:23 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:25:09.049 04:22:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:09.050 04:22:23 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:09.050 04:22:23 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:09.050 04:22:23 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:09.050 04:22:23 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:09.050 04:22:23 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:25:09.050 04:22:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:09.050 04:22:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:09.050 04:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:09.050 04:22:23 -- nvmf/common.sh@469 -- # nvmfpid=4104367 00:25:09.050 04:22:23 -- nvmf/common.sh@470 -- # waitforlisten 4104367 00:25:09.050 04:22:23 -- common/autotest_common.sh@819 -- # '[' -z 4104367 ']' 00:25:09.050 04:22:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.050 04:22:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:09.050 04:22:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.050 04:22:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:09.050 04:22:23 -- common/autotest_common.sh@10 -- # set +x 00:25:09.050 04:22:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:09.050 [2024-05-14 04:22:23.463969] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:09.050 [2024-05-14 04:22:23.464077] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.050 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.050 [2024-05-14 04:22:23.584112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.307 [2024-05-14 04:22:23.674563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:09.307 [2024-05-14 04:22:23.674724] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.307 [2024-05-14 04:22:23.674736] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.307 [2024-05-14 04:22:23.674744] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
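The format_interchange_psk block above converts the raw hex key into the NVMe/TCP TLS interchange form: the ASCII key is run through gzip -1 so the gzip trailer yields its CRC32, the four CRC bytes are appended to the key, and the result is base64-encoded under the NVMeTLSkey-1:02: prefix (hash identifier 02 corresponds to the SHA-384 variant). A standalone sketch of the same pipeline with illustrative variable names; the test script itself routes the bytes through /dev/fd descriptors, which is safer than a shell variable if the CRC ever contains a NUL byte:

  key=00112233445566778899aabbccddeeff0011223344556677
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)    # CRC32 taken from the gzip trailer
  psk=$(echo -n "$key$crc" | base64)                          # base64(key bytes || CRC32)
  echo "NVMeTLSkey-1:02:$psk:"                                # matches key_long above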
00:25:09.307 [2024-05-14 04:22:23.674772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.564 04:22:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:09.564 04:22:24 -- common/autotest_common.sh@852 -- # return 0 00:25:09.564 04:22:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:09.564 04:22:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:09.564 04:22:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.821 04:22:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.821 04:22:24 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:09.821 04:22:24 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:09.821 04:22:24 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:09.821 [2024-05-14 04:22:24.308327] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.821 04:22:24 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:10.079 04:22:24 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:10.079 [2024-05-14 04:22:24.592389] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:10.079 [2024-05-14 04:22:24.592615] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.079 04:22:24 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:10.336 malloc0 00:25:10.336 04:22:24 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:10.336 04:22:24 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:10.594 04:22:25 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:10.594 04:22:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:10.594 04:22:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:10.594 04:22:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:10.594 04:22:25 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:25:10.594 04:22:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:10.594 04:22:25 -- target/tls.sh@28 -- # bdevperf_pid=4104695 00:25:10.594 04:22:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:10.594 04:22:25 -- target/tls.sh@31 -- # waitforlisten 4104695 /var/tmp/bdevperf.sock 00:25:10.594 04:22:25 -- common/autotest_common.sh@819 -- # '[' -z 4104695 ']' 00:25:10.594 04:22:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.594 04:22:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:10.594 04:22:25 -- common/autotest_common.sh@826 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:10.594 04:22:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:10.594 04:22:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.594 04:22:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:10.594 [2024-05-14 04:22:25.111333] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:10.594 [2024-05-14 04:22:25.111452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104695 ] 00:25:10.852 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.852 [2024-05-14 04:22:25.231469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.852 [2024-05-14 04:22:25.321942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.417 04:22:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:11.417 04:22:25 -- common/autotest_common.sh@852 -- # return 0 00:25:11.417 04:22:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:11.417 [2024-05-14 04:22:25.950653] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.675 TLSTESTn1 00:25:11.675 04:22:26 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:11.675 Running I/O for 10 seconds... 
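While the 10-second run proceeds, it is worth noting how little the initiator side needs: bdevperf is started idle (-z) on its own RPC socket, the controller is attached with the PSK, and the workload is kicked off over that same socket. Condensed from the commands above, with $SPDK standing in for the checkout path:

  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $SPDK/test/nvmf/target/key_long.txt
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests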
00:25:21.685 00:25:21.685 Latency(us) 00:25:21.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.685 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:21.685 Verification LBA range: start 0x0 length 0x2000 00:25:21.685 TLSTESTn1 : 10.01 6411.13 25.04 0.00 0.00 19946.11 3069.84 43046.80 00:25:21.685 =================================================================================================================== 00:25:21.685 Total : 6411.13 25.04 0.00 0.00 19946.11 3069.84 43046.80 00:25:21.685 0 00:25:21.685 04:22:36 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:21.685 04:22:36 -- target/tls.sh@45 -- # killprocess 4104695 00:25:21.685 04:22:36 -- common/autotest_common.sh@926 -- # '[' -z 4104695 ']' 00:25:21.685 04:22:36 -- common/autotest_common.sh@930 -- # kill -0 4104695 00:25:21.685 04:22:36 -- common/autotest_common.sh@931 -- # uname 00:25:21.685 04:22:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:21.685 04:22:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4104695 00:25:21.685 04:22:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:21.685 04:22:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:21.685 04:22:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4104695' 00:25:21.685 killing process with pid 4104695 00:25:21.685 04:22:36 -- common/autotest_common.sh@945 -- # kill 4104695 00:25:21.685 Received shutdown signal, test time was about 10.000000 seconds 00:25:21.685 00:25:21.685 Latency(us) 00:25:21.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.685 =================================================================================================================== 00:25:21.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.685 04:22:36 -- common/autotest_common.sh@950 -- # wait 4104695 00:25:22.253 04:22:36 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:22.253 04:22:36 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:22.253 04:22:36 -- common/autotest_common.sh@640 -- # local es=0 00:25:22.253 04:22:36 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:22.253 04:22:36 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:25:22.253 04:22:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.253 04:22:36 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:25:22.253 04:22:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.253 04:22:36 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:22.253 04:22:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:22.253 04:22:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:22.253 04:22:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:22.253 04:22:36 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:25:22.253 04:22:36 -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:22.253 04:22:36 -- target/tls.sh@28 -- # bdevperf_pid=4106822 00:25:22.253 04:22:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:22.253 04:22:36 -- target/tls.sh@31 -- # waitforlisten 4106822 /var/tmp/bdevperf.sock 00:25:22.253 04:22:36 -- common/autotest_common.sh@819 -- # '[' -z 4106822 ']' 00:25:22.253 04:22:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:22.253 04:22:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:22.253 04:22:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:22.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:22.253 04:22:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:22.253 04:22:36 -- common/autotest_common.sh@10 -- # set +x 00:25:22.253 04:22:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:22.253 [2024-05-14 04:22:36.648795] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:22.253 [2024-05-14 04:22:36.648943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4106822 ] 00:25:22.253 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.253 [2024-05-14 04:22:36.779042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.512 [2024-05-14 04:22:36.877218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.080 04:22:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:23.080 04:22:37 -- common/autotest_common.sh@852 -- # return 0 00:25:23.080 04:22:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:23.080 [2024-05-14 04:22:37.489178] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:23.080 [2024-05-14 04:22:37.489230] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:23.080 request: 00:25:23.080 { 00:25:23.080 "name": "TLSTEST", 00:25:23.080 "trtype": "tcp", 00:25:23.080 "traddr": "10.0.0.2", 00:25:23.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:23.080 "adrfam": "ipv4", 00:25:23.080 "trsvcid": "4420", 00:25:23.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.080 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:23.080 "method": "bdev_nvme_attach_controller", 00:25:23.080 "req_id": 1 00:25:23.080 } 00:25:23.080 Got JSON-RPC error response 00:25:23.080 response: 00:25:23.080 { 00:25:23.080 "code": -22, 00:25:23.080 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:25:23.080 } 00:25:23.080 04:22:37 -- target/tls.sh@36 -- # killprocess 4106822 00:25:23.080 04:22:37 -- common/autotest_common.sh@926 -- # '[' -z 4106822 ']' 00:25:23.080 04:22:37 -- common/autotest_common.sh@930 -- # kill -0 4106822 00:25:23.080 04:22:37 
-- common/autotest_common.sh@931 -- # uname 00:25:23.080 04:22:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:23.080 04:22:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4106822 00:25:23.080 04:22:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:23.080 04:22:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:23.080 04:22:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4106822' 00:25:23.080 killing process with pid 4106822 00:25:23.080 04:22:37 -- common/autotest_common.sh@945 -- # kill 4106822 00:25:23.080 Received shutdown signal, test time was about 10.000000 seconds 00:25:23.080 00:25:23.080 Latency(us) 00:25:23.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.080 =================================================================================================================== 00:25:23.080 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:23.080 04:22:37 -- common/autotest_common.sh@950 -- # wait 4106822 00:25:23.339 04:22:37 -- target/tls.sh@37 -- # return 1 00:25:23.339 04:22:37 -- common/autotest_common.sh@643 -- # es=1 00:25:23.339 04:22:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:23.339 04:22:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:23.339 04:22:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:23.339 04:22:37 -- target/tls.sh@183 -- # killprocess 4104367 00:25:23.339 04:22:37 -- common/autotest_common.sh@926 -- # '[' -z 4104367 ']' 00:25:23.339 04:22:37 -- common/autotest_common.sh@930 -- # kill -0 4104367 00:25:23.339 04:22:37 -- common/autotest_common.sh@931 -- # uname 00:25:23.339 04:22:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:23.339 04:22:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4104367 00:25:23.598 04:22:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:23.598 04:22:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:23.598 04:22:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4104367' 00:25:23.598 killing process with pid 4104367 00:25:23.598 04:22:37 -- common/autotest_common.sh@945 -- # kill 4104367 00:25:23.598 04:22:37 -- common/autotest_common.sh@950 -- # wait 4104367 00:25:24.166 04:22:38 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:25:24.166 04:22:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:24.166 04:22:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:24.166 04:22:38 -- common/autotest_common.sh@10 -- # set +x 00:25:24.166 04:22:38 -- nvmf/common.sh@469 -- # nvmfpid=4107176 00:25:24.166 04:22:38 -- nvmf/common.sh@470 -- # waitforlisten 4107176 00:25:24.166 04:22:38 -- common/autotest_common.sh@819 -- # '[' -z 4107176 ']' 00:25:24.166 04:22:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.166 04:22:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:24.166 04:22:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
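The chmod 0666 / failed-attach sequence above is a deliberate negative test: the attach is wrapped so the step passes only when the inner command fails (es ends up non-zero after the PSK file is rejected). A much-simplified stand-in for the autotest_common.sh wrapper, shown only to illustrate the pattern; the real helper also validates the callee with type -t and treats exit codes above 128 specially, as the trace shows:

  NOT() {
      local es=0
      "$@" || es=$?      # run the wrapped command, keep its exit status
      (( es != 0 ))      # succeed only if the command failed
  }
  # e.g. NOT run_bdevperf ... key_long.txt   # passes, because the 0666 key file is rejected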
00:25:24.166 04:22:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:24.166 04:22:38 -- common/autotest_common.sh@10 -- # set +x 00:25:24.166 04:22:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:24.166 [2024-05-14 04:22:38.537574] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:24.166 [2024-05-14 04:22:38.537696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.166 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.166 [2024-05-14 04:22:38.675150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.425 [2024-05-14 04:22:38.773924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:24.425 [2024-05-14 04:22:38.774124] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.425 [2024-05-14 04:22:38.774139] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.425 [2024-05-14 04:22:38.774149] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.425 [2024-05-14 04:22:38.774197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.683 04:22:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:24.684 04:22:39 -- common/autotest_common.sh@852 -- # return 0 00:25:24.684 04:22:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:24.684 04:22:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:24.684 04:22:39 -- common/autotest_common.sh@10 -- # set +x 00:25:24.684 04:22:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.684 04:22:39 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:24.684 04:22:39 -- common/autotest_common.sh@640 -- # local es=0 00:25:24.684 04:22:39 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:24.684 04:22:39 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:25:24.684 04:22:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.684 04:22:39 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:25:24.684 04:22:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.684 04:22:39 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:24.684 04:22:39 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:24.684 04:22:39 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:24.943 [2024-05-14 04:22:39.387684] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.943 04:22:39 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:25.203 04:22:39 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:25.203 [2024-05-14 04:22:39.647723] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:25.203 [2024-05-14 04:22:39.647942] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.203 04:22:39 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:25.461 malloc0 00:25:25.461 04:22:39 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:25.461 04:22:39 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:25.720 [2024-05-14 04:22:40.060211] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:25.720 [2024-05-14 04:22:40.060247] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:25:25.720 [2024-05-14 04:22:40.060266] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:25:25.720 request: 00:25:25.720 { 00:25:25.720 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.720 "host": "nqn.2016-06.io.spdk:host1", 00:25:25.720 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:25.720 "method": "nvmf_subsystem_add_host", 00:25:25.720 "req_id": 1 00:25:25.720 } 00:25:25.720 Got JSON-RPC error response 00:25:25.720 response: 00:25:25.720 { 00:25:25.720 "code": -32603, 00:25:25.720 "message": "Internal error" 00:25:25.720 } 00:25:25.720 04:22:40 -- common/autotest_common.sh@643 -- # es=1 00:25:25.720 04:22:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:25.720 04:22:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:25.720 04:22:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:25.720 04:22:40 -- target/tls.sh@189 -- # killprocess 4107176 00:25:25.720 04:22:40 -- common/autotest_common.sh@926 -- # '[' -z 4107176 ']' 00:25:25.720 04:22:40 -- common/autotest_common.sh@930 -- # kill -0 4107176 00:25:25.721 04:22:40 -- common/autotest_common.sh@931 -- # uname 00:25:25.721 04:22:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:25.721 04:22:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4107176 00:25:25.721 04:22:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:25.721 04:22:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:25.721 04:22:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4107176' 00:25:25.721 killing process with pid 4107176 00:25:25.721 04:22:40 -- common/autotest_common.sh@945 -- # kill 4107176 00:25:25.721 04:22:40 -- common/autotest_common.sh@950 -- # wait 4107176 00:25:26.289 04:22:40 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:26.289 04:22:40 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:25:26.289 04:22:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:26.289 04:22:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:26.289 04:22:40 -- common/autotest_common.sh@10 -- # set +x 00:25:26.289 04:22:40 -- nvmf/common.sh@469 -- # nvmfpid=4107770 00:25:26.289 04:22:40 -- nvmf/common.sh@470 -- # 
waitforlisten 4107770 00:25:26.289 04:22:40 -- common/autotest_common.sh@819 -- # '[' -z 4107770 ']' 00:25:26.289 04:22:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.289 04:22:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:26.289 04:22:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.289 04:22:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:26.289 04:22:40 -- common/autotest_common.sh@10 -- # set +x 00:25:26.289 04:22:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:26.289 [2024-05-14 04:22:40.730024] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:26.289 [2024-05-14 04:22:40.730149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.289 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.289 [2024-05-14 04:22:40.858125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.549 [2024-05-14 04:22:40.955249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:26.549 [2024-05-14 04:22:40.955447] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.549 [2024-05-14 04:22:40.955461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.549 [2024-05-14 04:22:40.955470] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
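With the key file back at mode 0600, setup_nvmf_tgt can succeed this time. Stripped of the long workspace paths ($SPDK is an illustrative stand-in), the target-side RPC sequence it drives is the one below; the -k flag on the listener enables the TLS secure channel, which save_config later records as "secure_channel": true:

  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $SPDK/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk $SPDK/test/nvmf/target/key_long.txt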
00:25:26.549 [2024-05-14 04:22:40.955503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.116 04:22:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:27.116 04:22:41 -- common/autotest_common.sh@852 -- # return 0 00:25:27.116 04:22:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:27.116 04:22:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:27.116 04:22:41 -- common/autotest_common.sh@10 -- # set +x 00:25:27.116 04:22:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.116 04:22:41 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:27.116 04:22:41 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:27.116 04:22:41 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:27.116 [2024-05-14 04:22:41.568188] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.116 04:22:41 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:27.376 04:22:41 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:27.376 [2024-05-14 04:22:41.840254] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:27.376 [2024-05-14 04:22:41.840495] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.376 04:22:41 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:27.635 malloc0 00:25:27.635 04:22:42 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:27.635 04:22:42 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:27.894 04:22:42 -- target/tls.sh@197 -- # bdevperf_pid=4108097 00:25:27.894 04:22:42 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:27.894 04:22:42 -- target/tls.sh@200 -- # waitforlisten 4108097 /var/tmp/bdevperf.sock 00:25:27.894 04:22:42 -- common/autotest_common.sh@819 -- # '[' -z 4108097 ']' 00:25:27.894 04:22:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:27.894 04:22:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:27.894 04:22:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
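Once the attach in the next step succeeds, the TLS-protected namespace shows up on the initiator as an ordinary bdev named TLSTESTn1; the test only exercises it through bdevperf, but it can be inspected directly as well. The queries below are an aside, not something tls.sh issues:

  # Optional inspection over the bdevperf RPC socket (not part of the test flow).
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b TLSTESTn1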
00:25:27.895 04:22:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:27.895 04:22:42 -- common/autotest_common.sh@10 -- # set +x 00:25:27.895 04:22:42 -- target/tls.sh@196 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:27.895 [2024-05-14 04:22:42.341141] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:27.895 [2024-05-14 04:22:42.341262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108097 ] 00:25:27.895 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.895 [2024-05-14 04:22:42.458414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.154 [2024-05-14 04:22:42.554132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.721 04:22:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:28.721 04:22:43 -- common/autotest_common.sh@852 -- # return 0 00:25:28.721 04:22:43 -- target/tls.sh@201 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:28.721 [2024-05-14 04:22:43.164008] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:28.721 TLSTESTn1 00:25:28.721 04:22:43 -- target/tls.sh@205 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py save_config 00:25:28.980 04:22:43 -- target/tls.sh@205 -- # tgtconf='{ 00:25:28.980 "subsystems": [ 00:25:28.980 { 00:25:28.980 "subsystem": "iobuf", 00:25:28.980 "config": [ 00:25:28.980 { 00:25:28.980 "method": "iobuf_set_options", 00:25:28.980 "params": { 00:25:28.980 "small_pool_count": 8192, 00:25:28.980 "large_pool_count": 1024, 00:25:28.980 "small_bufsize": 8192, 00:25:28.980 "large_bufsize": 135168 00:25:28.980 } 00:25:28.980 } 00:25:28.980 ] 00:25:28.980 }, 00:25:28.980 { 00:25:28.980 "subsystem": "sock", 00:25:28.980 "config": [ 00:25:28.980 { 00:25:28.980 "method": "sock_impl_set_options", 00:25:28.980 "params": { 00:25:28.980 "impl_name": "posix", 00:25:28.980 "recv_buf_size": 2097152, 00:25:28.980 "send_buf_size": 2097152, 00:25:28.980 "enable_recv_pipe": true, 00:25:28.980 "enable_quickack": false, 00:25:28.980 "enable_placement_id": 0, 00:25:28.980 "enable_zerocopy_send_server": true, 00:25:28.980 "enable_zerocopy_send_client": false, 00:25:28.980 "zerocopy_threshold": 0, 00:25:28.980 "tls_version": 0, 00:25:28.980 "enable_ktls": false 00:25:28.980 } 00:25:28.980 }, 00:25:28.980 { 00:25:28.980 "method": "sock_impl_set_options", 00:25:28.980 "params": { 00:25:28.980 "impl_name": "ssl", 00:25:28.980 "recv_buf_size": 4096, 00:25:28.980 "send_buf_size": 4096, 00:25:28.980 "enable_recv_pipe": true, 00:25:28.980 "enable_quickack": false, 00:25:28.980 "enable_placement_id": 0, 00:25:28.980 "enable_zerocopy_send_server": true, 00:25:28.980 "enable_zerocopy_send_client": false, 00:25:28.980 "zerocopy_threshold": 0, 00:25:28.980 "tls_version": 0, 00:25:28.980 "enable_ktls": false 00:25:28.980 } 00:25:28.980 } 00:25:28.980 ] 00:25:28.980 }, 00:25:28.980 { 00:25:28.980 "subsystem": "vmd", 00:25:28.980 "config": [] 00:25:28.980 }, 00:25:28.980 { 00:25:28.980 
"subsystem": "accel", 00:25:28.980 "config": [ 00:25:28.980 { 00:25:28.980 "method": "accel_set_options", 00:25:28.980 "params": { 00:25:28.980 "small_cache_size": 128, 00:25:28.980 "large_cache_size": 16, 00:25:28.980 "task_count": 2048, 00:25:28.980 "sequence_count": 2048, 00:25:28.980 "buf_count": 2048 00:25:28.980 } 00:25:28.980 } 00:25:28.980 ] 00:25:28.980 }, 00:25:28.980 { 00:25:28.980 "subsystem": "bdev", 00:25:28.980 "config": [ 00:25:28.980 { 00:25:28.980 "method": "bdev_set_options", 00:25:28.980 "params": { 00:25:28.980 "bdev_io_pool_size": 65535, 00:25:28.980 "bdev_io_cache_size": 256, 00:25:28.980 "bdev_auto_examine": true, 00:25:28.980 "iobuf_small_cache_size": 128, 00:25:28.980 "iobuf_large_cache_size": 16 00:25:28.980 } 00:25:28.980 }, 00:25:28.980 { 00:25:28.980 "method": "bdev_raid_set_options", 00:25:28.980 "params": { 00:25:28.980 "process_window_size_kb": 1024 00:25:28.980 } 00:25:28.980 }, 00:25:28.980 { 00:25:28.980 "method": "bdev_iscsi_set_options", 00:25:28.980 "params": { 00:25:28.980 "timeout_sec": 30 00:25:28.980 } 00:25:28.980 }, 00:25:28.980 { 00:25:28.980 "method": "bdev_nvme_set_options", 00:25:28.980 "params": { 00:25:28.980 "action_on_timeout": "none", 00:25:28.980 "timeout_us": 0, 00:25:28.980 "timeout_admin_us": 0, 00:25:28.980 "keep_alive_timeout_ms": 10000, 00:25:28.980 "transport_retry_count": 4, 00:25:28.980 "arbitration_burst": 0, 00:25:28.980 "low_priority_weight": 0, 00:25:28.980 "medium_priority_weight": 0, 00:25:28.980 "high_priority_weight": 0, 00:25:28.980 "nvme_adminq_poll_period_us": 10000, 00:25:28.980 "nvme_ioq_poll_period_us": 0, 00:25:28.980 "io_queue_requests": 0, 00:25:28.980 "delay_cmd_submit": true, 00:25:28.980 "bdev_retry_count": 3, 00:25:28.980 "transport_ack_timeout": 0, 00:25:28.980 "ctrlr_loss_timeout_sec": 0, 00:25:28.980 "reconnect_delay_sec": 0, 00:25:28.981 "fast_io_fail_timeout_sec": 0, 00:25:28.981 "generate_uuids": false, 00:25:28.981 "transport_tos": 0, 00:25:28.981 "io_path_stat": false, 00:25:28.981 "allow_accel_sequence": false 00:25:28.981 } 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "method": "bdev_nvme_set_hotplug", 00:25:28.981 "params": { 00:25:28.981 "period_us": 100000, 00:25:28.981 "enable": false 00:25:28.981 } 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "method": "bdev_malloc_create", 00:25:28.981 "params": { 00:25:28.981 "name": "malloc0", 00:25:28.981 "num_blocks": 8192, 00:25:28.981 "block_size": 4096, 00:25:28.981 "physical_block_size": 4096, 00:25:28.981 "uuid": "1eaddbc6-7dfc-425a-addb-0cb3ccd6fe90", 00:25:28.981 "optimal_io_boundary": 0 00:25:28.981 } 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "method": "bdev_wait_for_examine" 00:25:28.981 } 00:25:28.981 ] 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "subsystem": "nbd", 00:25:28.981 "config": [] 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "subsystem": "scheduler", 00:25:28.981 "config": [ 00:25:28.981 { 00:25:28.981 "method": "framework_set_scheduler", 00:25:28.981 "params": { 00:25:28.981 "name": "static" 00:25:28.981 } 00:25:28.981 } 00:25:28.981 ] 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "subsystem": "nvmf", 00:25:28.981 "config": [ 00:25:28.981 { 00:25:28.981 "method": "nvmf_set_config", 00:25:28.981 "params": { 00:25:28.981 "discovery_filter": "match_any", 00:25:28.981 "admin_cmd_passthru": { 00:25:28.981 "identify_ctrlr": false 00:25:28.981 } 00:25:28.981 } 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "method": "nvmf_set_max_subsystems", 00:25:28.981 "params": { 00:25:28.981 "max_subsystems": 1024 00:25:28.981 } 00:25:28.981 }, 
00:25:28.981 { 00:25:28.981 "method": "nvmf_set_crdt", 00:25:28.981 "params": { 00:25:28.981 "crdt1": 0, 00:25:28.981 "crdt2": 0, 00:25:28.981 "crdt3": 0 00:25:28.981 } 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "method": "nvmf_create_transport", 00:25:28.981 "params": { 00:25:28.981 "trtype": "TCP", 00:25:28.981 "max_queue_depth": 128, 00:25:28.981 "max_io_qpairs_per_ctrlr": 127, 00:25:28.981 "in_capsule_data_size": 4096, 00:25:28.981 "max_io_size": 131072, 00:25:28.981 "io_unit_size": 131072, 00:25:28.981 "max_aq_depth": 128, 00:25:28.981 "num_shared_buffers": 511, 00:25:28.981 "buf_cache_size": 4294967295, 00:25:28.981 "dif_insert_or_strip": false, 00:25:28.981 "zcopy": false, 00:25:28.981 "c2h_success": false, 00:25:28.981 "sock_priority": 0, 00:25:28.981 "abort_timeout_sec": 1 00:25:28.981 } 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "method": "nvmf_create_subsystem", 00:25:28.981 "params": { 00:25:28.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.981 "allow_any_host": false, 00:25:28.981 "serial_number": "SPDK00000000000001", 00:25:28.981 "model_number": "SPDK bdev Controller", 00:25:28.981 "max_namespaces": 10, 00:25:28.981 "min_cntlid": 1, 00:25:28.981 "max_cntlid": 65519, 00:25:28.981 "ana_reporting": false 00:25:28.981 } 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "method": "nvmf_subsystem_add_host", 00:25:28.981 "params": { 00:25:28.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.981 "host": "nqn.2016-06.io.spdk:host1", 00:25:28.981 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:25:28.981 } 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "method": "nvmf_subsystem_add_ns", 00:25:28.981 "params": { 00:25:28.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.981 "namespace": { 00:25:28.981 "nsid": 1, 00:25:28.981 "bdev_name": "malloc0", 00:25:28.981 "nguid": "1EADDBC67DFC425AADDB0CB3CCD6FE90", 00:25:28.981 "uuid": "1eaddbc6-7dfc-425a-addb-0cb3ccd6fe90" 00:25:28.981 } 00:25:28.981 } 00:25:28.981 }, 00:25:28.981 { 00:25:28.981 "method": "nvmf_subsystem_add_listener", 00:25:28.981 "params": { 00:25:28.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.981 "listen_address": { 00:25:28.981 "trtype": "TCP", 00:25:28.981 "adrfam": "IPv4", 00:25:28.981 "traddr": "10.0.0.2", 00:25:28.981 "trsvcid": "4420" 00:25:28.981 }, 00:25:28.981 "secure_channel": true 00:25:28.981 } 00:25:28.981 } 00:25:28.981 ] 00:25:28.981 } 00:25:28.981 ] 00:25:28.981 }' 00:25:28.981 04:22:43 -- target/tls.sh@206 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:29.239 04:22:43 -- target/tls.sh@206 -- # bdevperfconf='{ 00:25:29.239 "subsystems": [ 00:25:29.239 { 00:25:29.239 "subsystem": "iobuf", 00:25:29.239 "config": [ 00:25:29.239 { 00:25:29.239 "method": "iobuf_set_options", 00:25:29.239 "params": { 00:25:29.239 "small_pool_count": 8192, 00:25:29.239 "large_pool_count": 1024, 00:25:29.239 "small_bufsize": 8192, 00:25:29.239 "large_bufsize": 135168 00:25:29.239 } 00:25:29.239 } 00:25:29.239 ] 00:25:29.239 }, 00:25:29.239 { 00:25:29.239 "subsystem": "sock", 00:25:29.239 "config": [ 00:25:29.239 { 00:25:29.239 "method": "sock_impl_set_options", 00:25:29.239 "params": { 00:25:29.239 "impl_name": "posix", 00:25:29.239 "recv_buf_size": 2097152, 00:25:29.239 "send_buf_size": 2097152, 00:25:29.239 "enable_recv_pipe": true, 00:25:29.239 "enable_quickack": false, 00:25:29.239 "enable_placement_id": 0, 00:25:29.239 "enable_zerocopy_send_server": true, 00:25:29.239 "enable_zerocopy_send_client": false, 00:25:29.239 
"zerocopy_threshold": 0, 00:25:29.239 "tls_version": 0, 00:25:29.239 "enable_ktls": false 00:25:29.239 } 00:25:29.239 }, 00:25:29.239 { 00:25:29.239 "method": "sock_impl_set_options", 00:25:29.239 "params": { 00:25:29.239 "impl_name": "ssl", 00:25:29.239 "recv_buf_size": 4096, 00:25:29.239 "send_buf_size": 4096, 00:25:29.239 "enable_recv_pipe": true, 00:25:29.239 "enable_quickack": false, 00:25:29.239 "enable_placement_id": 0, 00:25:29.239 "enable_zerocopy_send_server": true, 00:25:29.239 "enable_zerocopy_send_client": false, 00:25:29.239 "zerocopy_threshold": 0, 00:25:29.239 "tls_version": 0, 00:25:29.239 "enable_ktls": false 00:25:29.239 } 00:25:29.239 } 00:25:29.239 ] 00:25:29.239 }, 00:25:29.239 { 00:25:29.239 "subsystem": "vmd", 00:25:29.239 "config": [] 00:25:29.239 }, 00:25:29.239 { 00:25:29.239 "subsystem": "accel", 00:25:29.239 "config": [ 00:25:29.239 { 00:25:29.239 "method": "accel_set_options", 00:25:29.239 "params": { 00:25:29.239 "small_cache_size": 128, 00:25:29.239 "large_cache_size": 16, 00:25:29.239 "task_count": 2048, 00:25:29.239 "sequence_count": 2048, 00:25:29.239 "buf_count": 2048 00:25:29.239 } 00:25:29.239 } 00:25:29.239 ] 00:25:29.239 }, 00:25:29.239 { 00:25:29.239 "subsystem": "bdev", 00:25:29.239 "config": [ 00:25:29.239 { 00:25:29.239 "method": "bdev_set_options", 00:25:29.239 "params": { 00:25:29.239 "bdev_io_pool_size": 65535, 00:25:29.239 "bdev_io_cache_size": 256, 00:25:29.239 "bdev_auto_examine": true, 00:25:29.239 "iobuf_small_cache_size": 128, 00:25:29.239 "iobuf_large_cache_size": 16 00:25:29.239 } 00:25:29.239 }, 00:25:29.239 { 00:25:29.240 "method": "bdev_raid_set_options", 00:25:29.240 "params": { 00:25:29.240 "process_window_size_kb": 1024 00:25:29.240 } 00:25:29.240 }, 00:25:29.240 { 00:25:29.240 "method": "bdev_iscsi_set_options", 00:25:29.240 "params": { 00:25:29.240 "timeout_sec": 30 00:25:29.240 } 00:25:29.240 }, 00:25:29.240 { 00:25:29.240 "method": "bdev_nvme_set_options", 00:25:29.240 "params": { 00:25:29.240 "action_on_timeout": "none", 00:25:29.240 "timeout_us": 0, 00:25:29.240 "timeout_admin_us": 0, 00:25:29.240 "keep_alive_timeout_ms": 10000, 00:25:29.240 "transport_retry_count": 4, 00:25:29.240 "arbitration_burst": 0, 00:25:29.240 "low_priority_weight": 0, 00:25:29.240 "medium_priority_weight": 0, 00:25:29.240 "high_priority_weight": 0, 00:25:29.240 "nvme_adminq_poll_period_us": 10000, 00:25:29.240 "nvme_ioq_poll_period_us": 0, 00:25:29.240 "io_queue_requests": 512, 00:25:29.240 "delay_cmd_submit": true, 00:25:29.240 "bdev_retry_count": 3, 00:25:29.240 "transport_ack_timeout": 0, 00:25:29.240 "ctrlr_loss_timeout_sec": 0, 00:25:29.240 "reconnect_delay_sec": 0, 00:25:29.240 "fast_io_fail_timeout_sec": 0, 00:25:29.240 "generate_uuids": false, 00:25:29.240 "transport_tos": 0, 00:25:29.240 "io_path_stat": false, 00:25:29.240 "allow_accel_sequence": false 00:25:29.240 } 00:25:29.240 }, 00:25:29.240 { 00:25:29.240 "method": "bdev_nvme_attach_controller", 00:25:29.240 "params": { 00:25:29.240 "name": "TLSTEST", 00:25:29.240 "trtype": "TCP", 00:25:29.240 "adrfam": "IPv4", 00:25:29.240 "traddr": "10.0.0.2", 00:25:29.240 "trsvcid": "4420", 00:25:29.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.240 "prchk_reftag": false, 00:25:29.240 "prchk_guard": false, 00:25:29.240 "ctrlr_loss_timeout_sec": 0, 00:25:29.240 "reconnect_delay_sec": 0, 00:25:29.240 "fast_io_fail_timeout_sec": 0, 00:25:29.240 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:29.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:29.240 
"hdgst": false, 00:25:29.240 "ddgst": false 00:25:29.240 } 00:25:29.240 }, 00:25:29.240 { 00:25:29.240 "method": "bdev_nvme_set_hotplug", 00:25:29.240 "params": { 00:25:29.240 "period_us": 100000, 00:25:29.240 "enable": false 00:25:29.240 } 00:25:29.240 }, 00:25:29.240 { 00:25:29.240 "method": "bdev_wait_for_examine" 00:25:29.240 } 00:25:29.240 ] 00:25:29.240 }, 00:25:29.240 { 00:25:29.240 "subsystem": "nbd", 00:25:29.240 "config": [] 00:25:29.240 } 00:25:29.240 ] 00:25:29.240 }' 00:25:29.240 04:22:43 -- target/tls.sh@208 -- # killprocess 4108097 00:25:29.240 04:22:43 -- common/autotest_common.sh@926 -- # '[' -z 4108097 ']' 00:25:29.240 04:22:43 -- common/autotest_common.sh@930 -- # kill -0 4108097 00:25:29.240 04:22:43 -- common/autotest_common.sh@931 -- # uname 00:25:29.240 04:22:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:29.240 04:22:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4108097 00:25:29.240 04:22:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:29.240 04:22:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:29.240 04:22:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4108097' 00:25:29.240 killing process with pid 4108097 00:25:29.240 04:22:43 -- common/autotest_common.sh@945 -- # kill 4108097 00:25:29.240 Received shutdown signal, test time was about 10.000000 seconds 00:25:29.240 00:25:29.240 Latency(us) 00:25:29.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.240 =================================================================================================================== 00:25:29.240 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:29.240 04:22:43 -- common/autotest_common.sh@950 -- # wait 4108097 00:25:29.498 04:22:44 -- target/tls.sh@209 -- # killprocess 4107770 00:25:29.498 04:22:44 -- common/autotest_common.sh@926 -- # '[' -z 4107770 ']' 00:25:29.498 04:22:44 -- common/autotest_common.sh@930 -- # kill -0 4107770 00:25:29.498 04:22:44 -- common/autotest_common.sh@931 -- # uname 00:25:29.498 04:22:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:29.498 04:22:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4107770 00:25:29.498 04:22:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:29.498 04:22:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:29.498 04:22:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4107770' 00:25:29.498 killing process with pid 4107770 00:25:29.498 04:22:44 -- common/autotest_common.sh@945 -- # kill 4107770 00:25:29.498 04:22:44 -- common/autotest_common.sh@950 -- # wait 4107770 00:25:30.065 04:22:44 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:30.065 04:22:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:30.065 04:22:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:30.065 04:22:44 -- common/autotest_common.sh@10 -- # set +x 00:25:30.065 04:22:44 -- target/tls.sh@212 -- # echo '{ 00:25:30.065 "subsystems": [ 00:25:30.065 { 00:25:30.065 "subsystem": "iobuf", 00:25:30.065 "config": [ 00:25:30.065 { 00:25:30.065 "method": "iobuf_set_options", 00:25:30.065 "params": { 00:25:30.065 "small_pool_count": 8192, 00:25:30.065 "large_pool_count": 1024, 00:25:30.065 "small_bufsize": 8192, 00:25:30.065 "large_bufsize": 135168 00:25:30.065 } 00:25:30.065 } 00:25:30.065 ] 00:25:30.065 }, 00:25:30.065 { 00:25:30.065 "subsystem": "sock", 00:25:30.065 
"config": [ 00:25:30.065 { 00:25:30.065 "method": "sock_impl_set_options", 00:25:30.065 "params": { 00:25:30.065 "impl_name": "posix", 00:25:30.065 "recv_buf_size": 2097152, 00:25:30.065 "send_buf_size": 2097152, 00:25:30.065 "enable_recv_pipe": true, 00:25:30.065 "enable_quickack": false, 00:25:30.065 "enable_placement_id": 0, 00:25:30.065 "enable_zerocopy_send_server": true, 00:25:30.065 "enable_zerocopy_send_client": false, 00:25:30.065 "zerocopy_threshold": 0, 00:25:30.065 "tls_version": 0, 00:25:30.065 "enable_ktls": false 00:25:30.065 } 00:25:30.065 }, 00:25:30.065 { 00:25:30.065 "method": "sock_impl_set_options", 00:25:30.065 "params": { 00:25:30.065 "impl_name": "ssl", 00:25:30.065 "recv_buf_size": 4096, 00:25:30.065 "send_buf_size": 4096, 00:25:30.065 "enable_recv_pipe": true, 00:25:30.065 "enable_quickack": false, 00:25:30.065 "enable_placement_id": 0, 00:25:30.065 "enable_zerocopy_send_server": true, 00:25:30.065 "enable_zerocopy_send_client": false, 00:25:30.065 "zerocopy_threshold": 0, 00:25:30.065 "tls_version": 0, 00:25:30.065 "enable_ktls": false 00:25:30.065 } 00:25:30.065 } 00:25:30.065 ] 00:25:30.065 }, 00:25:30.065 { 00:25:30.065 "subsystem": "vmd", 00:25:30.065 "config": [] 00:25:30.065 }, 00:25:30.065 { 00:25:30.065 "subsystem": "accel", 00:25:30.065 "config": [ 00:25:30.065 { 00:25:30.065 "method": "accel_set_options", 00:25:30.065 "params": { 00:25:30.065 "small_cache_size": 128, 00:25:30.065 "large_cache_size": 16, 00:25:30.065 "task_count": 2048, 00:25:30.065 "sequence_count": 2048, 00:25:30.065 "buf_count": 2048 00:25:30.065 } 00:25:30.065 } 00:25:30.065 ] 00:25:30.065 }, 00:25:30.065 { 00:25:30.065 "subsystem": "bdev", 00:25:30.065 "config": [ 00:25:30.065 { 00:25:30.065 "method": "bdev_set_options", 00:25:30.065 "params": { 00:25:30.065 "bdev_io_pool_size": 65535, 00:25:30.065 "bdev_io_cache_size": 256, 00:25:30.065 "bdev_auto_examine": true, 00:25:30.065 "iobuf_small_cache_size": 128, 00:25:30.065 "iobuf_large_cache_size": 16 00:25:30.065 } 00:25:30.065 }, 00:25:30.065 { 00:25:30.065 "method": "bdev_raid_set_options", 00:25:30.065 "params": { 00:25:30.065 "process_window_size_kb": 1024 00:25:30.065 } 00:25:30.065 }, 00:25:30.065 { 00:25:30.065 "method": "bdev_iscsi_set_options", 00:25:30.065 "params": { 00:25:30.065 "timeout_sec": 30 00:25:30.065 } 00:25:30.065 }, 00:25:30.065 { 00:25:30.065 "method": "bdev_nvme_set_options", 00:25:30.065 "params": { 00:25:30.065 "action_on_timeout": "none", 00:25:30.065 "timeout_us": 0, 00:25:30.065 "timeout_admin_us": 0, 00:25:30.065 "keep_alive_timeout_ms": 10000, 00:25:30.065 "transport_retry_count": 4, 00:25:30.065 "arbitration_burst": 0, 00:25:30.065 "low_priority_weight": 0, 00:25:30.065 "medium_priority_weight": 0, 00:25:30.065 "high_priority_weight": 0, 00:25:30.065 "nvme_adminq_poll_period_us": 10000, 00:25:30.065 "nvme_ioq_poll_period_us": 0, 00:25:30.065 "io_queue_requests": 0, 00:25:30.065 "delay_cmd_submit": true, 00:25:30.065 "bdev_retry_count": 3, 00:25:30.065 "transport_ack_timeout": 0, 00:25:30.065 "ctrlr_loss_timeout_sec": 0, 00:25:30.065 "reconnect_delay_sec": 0, 00:25:30.065 "fast_io_fail_timeout_sec": 0, 00:25:30.065 "generate_uuids": false, 00:25:30.065 "transport_tos": 0, 00:25:30.065 "io_path_stat": false, 00:25:30.065 "allow_accel_sequence": false 00:25:30.065 } 00:25:30.065 }, 00:25:30.065 { 00:25:30.065 "method": "bdev_nvme_set_hotplug", 00:25:30.065 "params": { 00:25:30.065 "period_us": 100000, 00:25:30.066 "enable": false 00:25:30.066 } 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "method": 
"bdev_malloc_create", 00:25:30.066 "params": { 00:25:30.066 "name": "malloc0", 00:25:30.066 "num_blocks": 8192, 00:25:30.066 "block_size": 4096, 00:25:30.066 "physical_block_size": 4096, 00:25:30.066 "uuid": "1eaddbc6-7dfc-425a-addb-0cb3ccd6fe90", 00:25:30.066 "optimal_io_boundary": 0 00:25:30.066 } 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "method": "bdev_wait_for_examine" 00:25:30.066 } 00:25:30.066 ] 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "subsystem": "nbd", 00:25:30.066 "config": [] 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "subsystem": "scheduler", 00:25:30.066 "config": [ 00:25:30.066 { 00:25:30.066 "method": "framework_set_scheduler", 00:25:30.066 "params": { 00:25:30.066 "name": "static" 00:25:30.066 } 00:25:30.066 } 00:25:30.066 ] 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "subsystem": "nvmf", 00:25:30.066 "config": [ 00:25:30.066 { 00:25:30.066 "method": "nvmf_set_config", 00:25:30.066 "params": { 00:25:30.066 "discovery_filter": "match_any", 00:25:30.066 "admin_cmd_passthru": { 00:25:30.066 "identify_ctrlr": false 00:25:30.066 } 00:25:30.066 } 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "method": "nvmf_set_max_subsystems", 00:25:30.066 "params": { 00:25:30.066 "max_subsystems": 1024 00:25:30.066 } 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "method": "nvmf_set_crdt", 00:25:30.066 "params": { 00:25:30.066 "crdt1": 0, 00:25:30.066 "crdt2": 0, 00:25:30.066 "crdt3": 0 00:25:30.066 } 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "method": "nvmf_create_transport", 00:25:30.066 "params": { 00:25:30.066 "trtype": "TCP", 00:25:30.066 "max_queue_depth": 128, 00:25:30.066 "max_io_qpairs_per_ctrlr": 127, 00:25:30.066 "in_capsule_data_size": 4096, 00:25:30.066 "max_io_size": 131072, 00:25:30.066 "io_unit_size": 131072, 00:25:30.066 "max_aq_depth": 128, 00:25:30.066 "num_shared_buffers": 511, 00:25:30.066 "buf_cache_size": 4294967295, 00:25:30.066 "dif_insert_or_strip": false, 00:25:30.066 "zcopy": false, 00:25:30.066 "c2h_success": false, 00:25:30.066 "sock_priority": 0, 00:25:30.066 "abort_timeout_sec": 1 00:25:30.066 } 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "method": "nvmf_create_subsystem", 00:25:30.066 "params": { 00:25:30.066 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.066 "allow_any_host": false, 00:25:30.066 "serial_number": "SPDK00000000000001", 00:25:30.066 "model_number": "SPDK bdev Controller", 00:25:30.066 "max_namespaces": 10, 00:25:30.066 "min_cntlid": 1, 00:25:30.066 "max_cntlid": 65519, 00:25:30.066 "ana_reporting": false 00:25:30.066 } 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "method": "nvmf_subsystem_add_host", 00:25:30.066 "params": { 00:25:30.066 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.066 "host": "nqn.2016-06.io.spdk:host1", 00:25:30.066 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:25:30.066 } 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "method": "nvmf_subsystem_add_ns", 00:25:30.066 "params": { 00:25:30.066 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.066 "namespace": { 00:25:30.066 "nsid": 1, 00:25:30.066 "bdev_name": "malloc0", 00:25:30.066 "nguid": "1EADDBC67DFC425AADDB0CB3CCD6FE90", 00:25:30.066 "uuid": "1eaddbc6-7dfc-425a-addb-0cb3ccd6fe90" 00:25:30.066 } 00:25:30.066 } 00:25:30.066 }, 00:25:30.066 { 00:25:30.066 "method": "nvmf_subsystem_add_listener", 00:25:30.066 "params": { 00:25:30.066 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.066 "listen_address": { 00:25:30.066 "trtype": "TCP", 00:25:30.066 "adrfam": "IPv4", 00:25:30.066 "traddr": "10.0.0.2", 00:25:30.066 "trsvcid": "4420" 00:25:30.066 }, 
00:25:30.066 "secure_channel": true 00:25:30.066 } 00:25:30.066 } 00:25:30.066 ] 00:25:30.066 } 00:25:30.066 ] 00:25:30.066 }' 00:25:30.066 04:22:44 -- nvmf/common.sh@469 -- # nvmfpid=4108475 00:25:30.066 04:22:44 -- nvmf/common.sh@470 -- # waitforlisten 4108475 00:25:30.066 04:22:44 -- common/autotest_common.sh@819 -- # '[' -z 4108475 ']' 00:25:30.066 04:22:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.066 04:22:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:30.066 04:22:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.066 04:22:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:30.066 04:22:44 -- common/autotest_common.sh@10 -- # set +x 00:25:30.066 04:22:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:30.066 [2024-05-14 04:22:44.646596] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:30.066 [2024-05-14 04:22:44.646716] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.325 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.325 [2024-05-14 04:22:44.775386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.325 [2024-05-14 04:22:44.872607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:30.325 [2024-05-14 04:22:44.872782] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.325 [2024-05-14 04:22:44.872796] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.325 [2024-05-14 04:22:44.872805] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:30.325 [2024-05-14 04:22:44.872837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.584 [2024-05-14 04:22:45.155147] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.843 [2024-05-14 04:22:45.197363] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:30.843 [2024-05-14 04:22:45.197615] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.843 04:22:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:30.843 04:22:45 -- common/autotest_common.sh@852 -- # return 0 00:25:30.843 04:22:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:30.843 04:22:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:30.843 04:22:45 -- common/autotest_common.sh@10 -- # set +x 00:25:30.843 04:22:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.843 04:22:45 -- target/tls.sh@216 -- # bdevperf_pid=4108744 00:25:30.843 04:22:45 -- target/tls.sh@217 -- # waitforlisten 4108744 /var/tmp/bdevperf.sock 00:25:30.843 04:22:45 -- common/autotest_common.sh@819 -- # '[' -z 4108744 ']' 00:25:30.843 04:22:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.843 04:22:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:30.843 04:22:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:30.843 04:22:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:30.843 04:22:45 -- common/autotest_common.sh@10 -- # set +x 00:25:30.843 04:22:45 -- target/tls.sh@213 -- # echo '{ 00:25:30.843 "subsystems": [ 00:25:30.843 { 00:25:30.843 "subsystem": "iobuf", 00:25:30.843 "config": [ 00:25:30.843 { 00:25:30.843 "method": "iobuf_set_options", 00:25:30.843 "params": { 00:25:30.843 "small_pool_count": 8192, 00:25:30.843 "large_pool_count": 1024, 00:25:30.843 "small_bufsize": 8192, 00:25:30.843 "large_bufsize": 135168 00:25:30.843 } 00:25:30.843 } 00:25:30.843 ] 00:25:30.843 }, 00:25:30.843 { 00:25:30.843 "subsystem": "sock", 00:25:30.843 "config": [ 00:25:30.843 { 00:25:30.843 "method": "sock_impl_set_options", 00:25:30.843 "params": { 00:25:30.843 "impl_name": "posix", 00:25:30.843 "recv_buf_size": 2097152, 00:25:30.843 "send_buf_size": 2097152, 00:25:30.843 "enable_recv_pipe": true, 00:25:30.843 "enable_quickack": false, 00:25:30.843 "enable_placement_id": 0, 00:25:30.843 "enable_zerocopy_send_server": true, 00:25:30.843 "enable_zerocopy_send_client": false, 00:25:30.843 "zerocopy_threshold": 0, 00:25:30.843 "tls_version": 0, 00:25:30.843 "enable_ktls": false 00:25:30.843 } 00:25:30.843 }, 00:25:30.843 { 00:25:30.843 "method": "sock_impl_set_options", 00:25:30.843 "params": { 00:25:30.843 "impl_name": "ssl", 00:25:30.843 "recv_buf_size": 4096, 00:25:30.843 "send_buf_size": 4096, 00:25:30.843 "enable_recv_pipe": true, 00:25:30.843 "enable_quickack": false, 00:25:30.843 "enable_placement_id": 0, 00:25:30.843 "enable_zerocopy_send_server": true, 00:25:30.843 "enable_zerocopy_send_client": false, 00:25:30.843 "zerocopy_threshold": 0, 00:25:30.843 "tls_version": 0, 00:25:30.843 "enable_ktls": false 00:25:30.843 } 00:25:30.843 } 00:25:30.843 ] 00:25:30.843 }, 00:25:30.843 { 00:25:30.843 "subsystem": "vmd", 00:25:30.843 "config": [] 00:25:30.843 }, 00:25:30.843 { 
00:25:30.843 "subsystem": "accel", 00:25:30.843 "config": [ 00:25:30.843 { 00:25:30.843 "method": "accel_set_options", 00:25:30.843 "params": { 00:25:30.843 "small_cache_size": 128, 00:25:30.843 "large_cache_size": 16, 00:25:30.843 "task_count": 2048, 00:25:30.843 "sequence_count": 2048, 00:25:30.843 "buf_count": 2048 00:25:30.843 } 00:25:30.843 } 00:25:30.843 ] 00:25:30.843 }, 00:25:30.843 { 00:25:30.843 "subsystem": "bdev", 00:25:30.843 "config": [ 00:25:30.843 { 00:25:30.843 "method": "bdev_set_options", 00:25:30.843 "params": { 00:25:30.843 "bdev_io_pool_size": 65535, 00:25:30.843 "bdev_io_cache_size": 256, 00:25:30.843 "bdev_auto_examine": true, 00:25:30.843 "iobuf_small_cache_size": 128, 00:25:30.843 "iobuf_large_cache_size": 16 00:25:30.843 } 00:25:30.843 }, 00:25:30.843 { 00:25:30.843 "method": "bdev_raid_set_options", 00:25:30.843 "params": { 00:25:30.843 "process_window_size_kb": 1024 00:25:30.843 } 00:25:30.843 }, 00:25:30.843 { 00:25:30.843 "method": "bdev_iscsi_set_options", 00:25:30.843 "params": { 00:25:30.843 "timeout_sec": 30 00:25:30.843 } 00:25:30.843 }, 00:25:30.843 { 00:25:30.843 "method": "bdev_nvme_set_options", 00:25:30.843 "params": { 00:25:30.843 "action_on_timeout": "none", 00:25:30.843 "timeout_us": 0, 00:25:30.843 "timeout_admin_us": 0, 00:25:30.843 "keep_alive_timeout_ms": 10000, 00:25:30.843 "transport_retry_count": 4, 00:25:30.843 "arbitration_burst": 0, 00:25:30.843 "low_priority_weight": 0, 00:25:30.843 "medium_priority_weight": 0, 00:25:30.843 "high_priority_weight": 0, 00:25:30.843 "nvme_adminq_poll_period_us": 10000, 00:25:30.843 "nvme_ioq_poll_period_us": 0, 00:25:30.843 "io_queue_requests": 512, 00:25:30.843 "delay_cmd_submit": true, 00:25:30.843 "bdev_retry_count": 3, 00:25:30.843 "transport_ack_timeout": 0, 00:25:30.843 "ctrlr_loss_timeout_sec": 0, 00:25:30.843 "reconnect_delay_sec": 0, 00:25:30.843 "fast_io_fail_timeout_sec": 0, 00:25:30.843 "generate_uuids": false, 00:25:30.843 "transport_tos": 0, 00:25:30.843 "io_path_stat": false, 00:25:30.843 "allow_accel_sequence": false 00:25:30.844 } 00:25:30.844 }, 00:25:30.844 { 00:25:30.844 "method": "bdev_nvme_attach_controller", 00:25:30.844 "params": { 00:25:30.844 "name": "TLSTEST", 00:25:30.844 "trtype": "TCP", 00:25:30.844 "adrfam": "IPv4", 00:25:30.844 "traddr": "10.0.0.2", 00:25:30.844 "trsvcid": "4420", 00:25:30.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.844 "prchk_reftag": false, 00:25:30.844 "prchk_guard": false, 00:25:30.844 "ctrlr_loss_timeout_sec": 0, 00:25:30.844 "reconnect_delay_sec": 0, 00:25:30.844 "fast_io_fail_timeout_sec": 0, 00:25:30.844 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:25:30.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:30.844 "hdgst": false, 00:25:30.844 "ddgst": false 00:25:30.844 } 00:25:30.844 }, 00:25:30.844 { 00:25:30.844 "method": "bdev_nvme_set_hotplug", 00:25:30.844 "params": { 00:25:30.844 "period_us": 100000, 00:25:30.844 "enable": false 00:25:30.844 } 00:25:30.844 }, 00:25:30.844 { 00:25:30.844 "method": "bdev_wait_for_examine" 00:25:30.844 } 00:25:30.844 ] 00:25:30.844 }, 00:25:30.844 { 00:25:30.844 "subsystem": "nbd", 00:25:30.844 "config": [] 00:25:30.844 } 00:25:30.844 ] 00:25:30.844 }' 00:25:30.844 04:22:45 -- target/tls.sh@213 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:31.102 [2024-05-14 04:22:45.432589] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:31.102 [2024-05-14 04:22:45.432695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108744 ] 00:25:31.102 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.102 [2024-05-14 04:22:45.542383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.102 [2024-05-14 04:22:45.637339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.361 [2024-05-14 04:22:45.844252] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:31.619 04:22:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:31.619 04:22:46 -- common/autotest_common.sh@852 -- # return 0 00:25:31.619 04:22:46 -- target/tls.sh@220 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:31.619 Running I/O for 10 seconds... 00:25:41.659 00:25:41.659 Latency(us) 00:25:41.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.659 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:41.659 Verification LBA range: start 0x0 length 0x2000 00:25:41.659 TLSTESTn1 : 10.01 6302.66 24.62 0.00 0.00 20289.47 3621.73 41391.16 00:25:41.659 =================================================================================================================== 00:25:41.659 Total : 6302.66 24.62 0.00 0.00 20289.47 3621.73 41391.16 00:25:41.659 0 00:25:41.659 04:22:56 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.659 04:22:56 -- target/tls.sh@223 -- # killprocess 4108744 00:25:41.659 04:22:56 -- common/autotest_common.sh@926 -- # '[' -z 4108744 ']' 00:25:41.659 04:22:56 -- common/autotest_common.sh@930 -- # kill -0 4108744 00:25:41.659 04:22:56 -- common/autotest_common.sh@931 -- # uname 00:25:41.659 04:22:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:41.659 04:22:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4108744 00:25:41.918 04:22:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:41.918 04:22:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:41.918 04:22:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4108744' 00:25:41.918 killing process with pid 4108744 00:25:41.918 04:22:56 -- common/autotest_common.sh@945 -- # kill 4108744 00:25:41.918 Received shutdown signal, test time was about 10.000000 seconds 00:25:41.918 00:25:41.918 Latency(us) 00:25:41.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.918 =================================================================================================================== 00:25:41.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:41.918 04:22:56 -- common/autotest_common.sh@950 -- # wait 4108744 00:25:42.177 04:22:56 -- target/tls.sh@224 -- # killprocess 4108475 00:25:42.177 04:22:56 -- common/autotest_common.sh@926 -- # '[' -z 4108475 ']' 00:25:42.177 04:22:56 -- common/autotest_common.sh@930 -- # kill -0 4108475 00:25:42.177 04:22:56 -- common/autotest_common.sh@931 -- # uname 00:25:42.177 04:22:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:42.177 04:22:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4108475 00:25:42.177 04:22:56 -- common/autotest_common.sh@932 -- # 
process_name=reactor_1 00:25:42.177 04:22:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:42.177 04:22:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4108475' 00:25:42.177 killing process with pid 4108475 00:25:42.177 04:22:56 -- common/autotest_common.sh@945 -- # kill 4108475 00:25:42.177 04:22:56 -- common/autotest_common.sh@950 -- # wait 4108475 00:25:42.744 04:22:57 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:25:42.744 04:22:57 -- target/tls.sh@227 -- # cleanup 00:25:42.744 04:22:57 -- target/tls.sh@15 -- # process_shm --id 0 00:25:42.744 04:22:57 -- common/autotest_common.sh@796 -- # type=--id 00:25:42.744 04:22:57 -- common/autotest_common.sh@797 -- # id=0 00:25:42.744 04:22:57 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:25:42.744 04:22:57 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:42.744 04:22:57 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:25:42.745 04:22:57 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:25:42.745 04:22:57 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:25:42.745 04:22:57 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:42.745 nvmf_trace.0 00:25:42.745 04:22:57 -- common/autotest_common.sh@811 -- # return 0 00:25:42.745 04:22:57 -- target/tls.sh@16 -- # killprocess 4108744 00:25:42.745 04:22:57 -- common/autotest_common.sh@926 -- # '[' -z 4108744 ']' 00:25:42.745 04:22:57 -- common/autotest_common.sh@930 -- # kill -0 4108744 00:25:42.745 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (4108744) - No such process 00:25:42.745 04:22:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 4108744 is not found' 00:25:42.745 Process with pid 4108744 is not found 00:25:42.745 04:22:57 -- target/tls.sh@17 -- # nvmftestfini 00:25:42.745 04:22:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:42.745 04:22:57 -- nvmf/common.sh@116 -- # sync 00:25:42.745 04:22:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:42.745 04:22:57 -- nvmf/common.sh@119 -- # set +e 00:25:42.745 04:22:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:42.745 04:22:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:42.745 rmmod nvme_tcp 00:25:42.745 rmmod nvme_fabrics 00:25:42.745 rmmod nvme_keyring 00:25:42.745 04:22:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:42.745 04:22:57 -- nvmf/common.sh@123 -- # set -e 00:25:42.745 04:22:57 -- nvmf/common.sh@124 -- # return 0 00:25:42.745 04:22:57 -- nvmf/common.sh@477 -- # '[' -n 4108475 ']' 00:25:42.745 04:22:57 -- nvmf/common.sh@478 -- # killprocess 4108475 00:25:42.745 04:22:57 -- common/autotest_common.sh@926 -- # '[' -z 4108475 ']' 00:25:42.745 04:22:57 -- common/autotest_common.sh@930 -- # kill -0 4108475 00:25:42.745 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (4108475) - No such process 00:25:42.745 04:22:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 4108475 is not found' 00:25:42.745 Process with pid 4108475 is not found 00:25:42.745 04:22:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:42.745 04:22:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:42.745 04:22:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:42.745 04:22:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.745 
04:22:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:42.745 04:22:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.745 04:22:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.745 04:22:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.278 04:22:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:45.278 04:22:59 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:45.278 00:25:45.278 real 1m12.836s 00:25:45.278 user 1m52.942s 00:25:45.278 sys 0m20.075s 00:25:45.278 04:22:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.278 04:22:59 -- common/autotest_common.sh@10 -- # set +x 00:25:45.278 ************************************ 00:25:45.278 END TEST nvmf_tls 00:25:45.278 ************************************ 00:25:45.278 04:22:59 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:45.278 04:22:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:45.278 04:22:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:45.278 04:22:59 -- common/autotest_common.sh@10 -- # set +x 00:25:45.278 ************************************ 00:25:45.278 START TEST nvmf_fips 00:25:45.278 ************************************ 00:25:45.278 04:22:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:45.278 * Looking for test storage... 00:25:45.278 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips 00:25:45.278 04:22:59 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.278 04:22:59 -- nvmf/common.sh@7 -- # uname -s 00:25:45.278 04:22:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.278 04:22:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.278 04:22:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.278 04:22:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.278 04:22:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.278 04:22:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.278 04:22:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.278 04:22:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.278 04:22:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.278 04:22:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.278 04:22:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:25:45.278 04:22:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:25:45.278 04:22:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.278 04:22:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.278 04:22:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:45.278 04:22:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:45.278 04:22:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.278 04:22:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.279 04:22:59 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.279 04:22:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.279 04:22:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.279 04:22:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.279 04:22:59 -- paths/export.sh@5 -- # export PATH 00:25:45.279 04:22:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.279 04:22:59 -- nvmf/common.sh@46 -- # : 0 00:25:45.279 04:22:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:45.279 04:22:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:45.279 04:22:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:45.279 04:22:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.279 04:22:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.279 04:22:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:45.279 04:22:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:45.279 04:22:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:45.279 04:22:59 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:25:45.279 04:22:59 -- fips/fips.sh@89 -- # check_openssl_version 00:25:45.279 04:22:59 -- fips/fips.sh@83 -- # local target=3.0.0 00:25:45.279 04:22:59 -- fips/fips.sh@85 -- # openssl version 00:25:45.279 04:22:59 -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:45.279 04:22:59 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:45.279 04:22:59 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:45.279 04:22:59 -- 
scripts/common.sh@332 -- # local ver1 ver1_l 00:25:45.279 04:22:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:45.279 04:22:59 -- scripts/common.sh@335 -- # IFS=.-: 00:25:45.279 04:22:59 -- scripts/common.sh@335 -- # read -ra ver1 00:25:45.279 04:22:59 -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.279 04:22:59 -- scripts/common.sh@336 -- # read -ra ver2 00:25:45.279 04:22:59 -- scripts/common.sh@337 -- # local 'op=>=' 00:25:45.279 04:22:59 -- scripts/common.sh@339 -- # ver1_l=3 00:25:45.279 04:22:59 -- scripts/common.sh@340 -- # ver2_l=3 00:25:45.279 04:22:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:45.279 04:22:59 -- scripts/common.sh@343 -- # case "$op" in 00:25:45.279 04:22:59 -- scripts/common.sh@347 -- # : 1 00:25:45.279 04:22:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:45.279 04:22:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:45.279 04:22:59 -- scripts/common.sh@364 -- # decimal 3 00:25:45.279 04:22:59 -- scripts/common.sh@352 -- # local d=3 00:25:45.279 04:22:59 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:45.279 04:22:59 -- scripts/common.sh@354 -- # echo 3 00:25:45.279 04:22:59 -- scripts/common.sh@364 -- # ver1[v]=3 00:25:45.279 04:22:59 -- scripts/common.sh@365 -- # decimal 3 00:25:45.279 04:22:59 -- scripts/common.sh@352 -- # local d=3 00:25:45.279 04:22:59 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:45.279 04:22:59 -- scripts/common.sh@354 -- # echo 3 00:25:45.279 04:22:59 -- scripts/common.sh@365 -- # ver2[v]=3 00:25:45.279 04:22:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:45.279 04:22:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:45.279 04:22:59 -- scripts/common.sh@363 -- # (( v++ )) 00:25:45.279 04:22:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:45.279 04:22:59 -- scripts/common.sh@364 -- # decimal 0 00:25:45.279 04:22:59 -- scripts/common.sh@352 -- # local d=0 00:25:45.279 04:22:59 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:45.279 04:22:59 -- scripts/common.sh@354 -- # echo 0 00:25:45.279 04:22:59 -- scripts/common.sh@364 -- # ver1[v]=0 00:25:45.279 04:22:59 -- scripts/common.sh@365 -- # decimal 0 00:25:45.279 04:22:59 -- scripts/common.sh@352 -- # local d=0 00:25:45.279 04:22:59 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:45.279 04:22:59 -- scripts/common.sh@354 -- # echo 0 00:25:45.279 04:22:59 -- scripts/common.sh@365 -- # ver2[v]=0 00:25:45.279 04:22:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:45.279 04:22:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:45.279 04:22:59 -- scripts/common.sh@363 -- # (( v++ )) 00:25:45.279 04:22:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.279 04:22:59 -- scripts/common.sh@364 -- # decimal 9 00:25:45.279 04:22:59 -- scripts/common.sh@352 -- # local d=9 00:25:45.279 04:22:59 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:45.279 04:22:59 -- scripts/common.sh@354 -- # echo 9 00:25:45.279 04:22:59 -- scripts/common.sh@364 -- # ver1[v]=9 00:25:45.279 04:22:59 -- scripts/common.sh@365 -- # decimal 0 00:25:45.279 04:22:59 -- scripts/common.sh@352 -- # local d=0 00:25:45.279 04:22:59 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:45.279 04:22:59 -- scripts/common.sh@354 -- # echo 0 00:25:45.279 04:22:59 -- scripts/common.sh@365 -- # ver2[v]=0 00:25:45.279 04:22:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:45.279 04:22:59 -- scripts/common.sh@366 -- # return 0 00:25:45.279 04:22:59 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:45.279 04:22:59 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:45.279 04:22:59 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:45.279 04:22:59 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:45.279 04:22:59 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:45.279 04:22:59 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:45.279 04:22:59 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:45.279 04:22:59 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:25:45.279 04:22:59 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:25:45.279 04:22:59 -- fips/fips.sh@114 -- # build_openssl_config 00:25:45.279 04:22:59 -- fips/fips.sh@37 -- # cat 00:25:45.279 04:22:59 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:25:45.279 04:22:59 -- fips/fips.sh@58 -- # cat - 00:25:45.279 04:22:59 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:45.279 04:22:59 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:45.279 04:22:59 -- fips/fips.sh@117 -- # mapfile -t providers 00:25:45.279 04:22:59 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:25:45.279 04:22:59 -- fips/fips.sh@117 -- # openssl list -providers 00:25:45.279 04:22:59 -- fips/fips.sh@117 -- # grep name 00:25:45.279 04:22:59 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:45.279 04:22:59 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:45.279 04:22:59 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:45.279 04:22:59 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:45.279 04:22:59 -- common/autotest_common.sh@640 -- # local es=0 00:25:45.279 04:22:59 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:45.279 04:22:59 -- common/autotest_common.sh@628 -- # local arg=openssl 00:25:45.279 04:22:59 -- fips/fips.sh@128 -- # : 00:25:45.279 04:22:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.279 04:22:59 -- common/autotest_common.sh@632 -- # type -t openssl 00:25:45.279 04:22:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.279 04:22:59 -- common/autotest_common.sh@634 -- # type -P openssl 00:25:45.279 04:22:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:45.279 04:22:59 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:25:45.279 04:22:59 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:25:45.279 04:22:59 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:25:45.279 Error setting digest 00:25:45.279 00F22D18207F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:45.279 00F22D18207F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:45.279 04:22:59 -- common/autotest_common.sh@643 -- # es=1 00:25:45.279 04:22:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:45.279 04:22:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:45.279 04:22:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:45.279 04:22:59 -- fips/fips.sh@131 -- # nvmftestinit 00:25:45.279 04:22:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:45.279 04:22:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.279 04:22:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:45.279 04:22:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:45.279 04:22:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:45.279 04:22:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.279 04:22:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.279 04:22:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.279 04:22:59 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:25:45.279 04:22:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:45.279 04:22:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:45.279 04:22:59 -- common/autotest_common.sh@10 -- # set +x 00:25:50.546 04:23:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:50.546 04:23:04 -- nvmf/common.sh@290 -- # 
pci_devs=() 00:25:50.546 04:23:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:50.546 04:23:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:50.546 04:23:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:50.546 04:23:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:50.546 04:23:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:50.546 04:23:04 -- nvmf/common.sh@294 -- # net_devs=() 00:25:50.546 04:23:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:50.546 04:23:04 -- nvmf/common.sh@295 -- # e810=() 00:25:50.546 04:23:04 -- nvmf/common.sh@295 -- # local -ga e810 00:25:50.546 04:23:04 -- nvmf/common.sh@296 -- # x722=() 00:25:50.546 04:23:04 -- nvmf/common.sh@296 -- # local -ga x722 00:25:50.546 04:23:04 -- nvmf/common.sh@297 -- # mlx=() 00:25:50.546 04:23:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:50.546 04:23:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.546 04:23:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:50.546 04:23:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:50.546 04:23:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:50.546 04:23:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:50.546 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:50.546 04:23:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:50.546 04:23:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:50.546 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:50.546 04:23:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:50.546 04:23:04 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 
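The OpenSSL probing a few screens above boils down to three checks: the library version must be at least 3.0.0, openssl list -providers must report both the base and the fips provider, and a non-approved digest must actually be rejected once OPENSSL_CONF points at spdk_fips.conf. A condensed sketch of those checks (not fips.sh verbatim):
openssl version | awk '{print $2}'              # 3.0.9 in this run, compared against the 3.0.0 floor
openssl list -providers | grep name             # must list both the openssl base provider and the fips provider
OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null \
    && echo 'md5 accepted - FIPS mode not active' \
    || echo 'md5 rejected - FIPS mode active'   # this run hit the "unsupported" error path, as expected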
00:25:50.546 04:23:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:50.546 04:23:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.546 04:23:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:50.546 04:23:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.546 04:23:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:50.546 Found net devices under 0000:27:00.0: cvl_0_0 00:25:50.546 04:23:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.546 04:23:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:50.546 04:23:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.546 04:23:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:50.546 04:23:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.546 04:23:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:50.546 Found net devices under 0000:27:00.1: cvl_0_1 00:25:50.546 04:23:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.546 04:23:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:50.546 04:23:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:50.546 04:23:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:50.546 04:23:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.546 04:23:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.546 04:23:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.546 04:23:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:50.546 04:23:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.546 04:23:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.546 04:23:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:50.546 04:23:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.546 04:23:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.546 04:23:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:50.546 04:23:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:50.546 04:23:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.546 04:23:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.546 04:23:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.546 04:23:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.546 04:23:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:50.546 04:23:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.546 04:23:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.546 04:23:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.546 04:23:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:50.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:50.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:25:50.546 00:25:50.546 --- 10.0.0.2 ping statistics --- 00:25:50.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.546 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:25:50.546 04:23:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:25:50.546 00:25:50.546 --- 10.0.0.1 ping statistics --- 00:25:50.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.546 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:25:50.546 04:23:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.546 04:23:04 -- nvmf/common.sh@410 -- # return 0 00:25:50.546 04:23:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:50.546 04:23:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.546 04:23:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:50.546 04:23:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.546 04:23:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:50.546 04:23:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:50.546 04:23:04 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:50.546 04:23:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:50.546 04:23:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:50.546 04:23:04 -- common/autotest_common.sh@10 -- # set +x 00:25:50.546 04:23:04 -- nvmf/common.sh@469 -- # nvmfpid=4115004 00:25:50.546 04:23:04 -- nvmf/common.sh@470 -- # waitforlisten 4115004 00:25:50.546 04:23:04 -- common/autotest_common.sh@819 -- # '[' -z 4115004 ']' 00:25:50.546 04:23:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:50.546 04:23:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.546 04:23:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:50.546 04:23:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.546 04:23:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:50.546 04:23:04 -- common/autotest_common.sh@10 -- # set +x 00:25:50.546 [2024-05-14 04:23:05.076825] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:50.546 [2024-05-14 04:23:05.076937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.805 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.805 [2024-05-14 04:23:05.200506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.805 [2024-05-14 04:23:05.295806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:50.805 [2024-05-14 04:23:05.295970] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.805 [2024-05-14 04:23:05.295983] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:50.805 [2024-05-14 04:23:05.295992] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.805 [2024-05-14 04:23:05.296016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.372 04:23:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:51.372 04:23:05 -- common/autotest_common.sh@852 -- # return 0 00:25:51.372 04:23:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:51.372 04:23:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:51.372 04:23:05 -- common/autotest_common.sh@10 -- # set +x 00:25:51.372 04:23:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.372 04:23:05 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:51.372 04:23:05 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:51.372 04:23:05 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:51.372 04:23:05 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:51.372 04:23:05 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:51.372 04:23:05 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:51.372 04:23:05 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:51.372 04:23:05 -- fips/fips.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:25:51.372 [2024-05-14 04:23:05.870957] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.372 [2024-05-14 04:23:05.886901] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:51.372 [2024-05-14 04:23:05.887105] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.372 malloc0 00:25:51.372 04:23:05 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:51.372 04:23:05 -- fips/fips.sh@148 -- # bdevperf_pid=4115114 00:25:51.372 04:23:05 -- fips/fips.sh@149 -- # waitforlisten 4115114 /var/tmp/bdevperf.sock 00:25:51.372 04:23:05 -- common/autotest_common.sh@819 -- # '[' -z 4115114 ']' 00:25:51.372 04:23:05 -- fips/fips.sh@146 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:51.372 04:23:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.372 04:23:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:51.372 04:23:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.372 04:23:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:51.372 04:23:05 -- common/autotest_common.sh@10 -- # set +x 00:25:51.630 [2024-05-14 04:23:06.052334] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
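fips.sh provisions its key the same way tls.sh provisioned key_long.txt: a PSK in the NVMe TLS interchange format is written to a file with owner-only permissions, and both sides are then pointed at that path. A minimal sketch using the exact key shown above:
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > test/nvmf/fips/key.txt
chmod 0600 test/nvmf/fips/key.txt               # the key file must not be group/world readable
# target side: nvmf_subsystem_add_host ... --psk key.txt; initiator side: bdev_nvme_attach_controller ... --psk key.txt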
00:25:51.631 [2024-05-14 04:23:06.052444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4115114 ] 00:25:51.631 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.631 [2024-05-14 04:23:06.163177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.889 [2024-05-14 04:23:06.258386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.148 04:23:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:52.148 04:23:06 -- common/autotest_common.sh@852 -- # return 0 00:25:52.148 04:23:06 -- fips/fips.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:52.407 [2024-05-14 04:23:06.861922] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:52.407 TLSTESTn1 00:25:52.407 04:23:06 -- fips/fips.sh@155 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:52.664 Running I/O for 10 seconds... 00:26:02.628 00:26:02.628 Latency(us) 00:26:02.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.628 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:02.628 Verification LBA range: start 0x0 length 0x2000 00:26:02.628 TLSTESTn1 : 10.01 6375.07 24.90 0.00 0.00 20058.95 3552.74 42494.92 00:26:02.628 =================================================================================================================== 00:26:02.628 Total : 6375.07 24.90 0.00 0.00 20058.95 3552.74 42494.92 00:26:02.628 0 00:26:02.628 04:23:17 -- fips/fips.sh@1 -- # cleanup 00:26:02.628 04:23:17 -- fips/fips.sh@15 -- # process_shm --id 0 00:26:02.628 04:23:17 -- common/autotest_common.sh@796 -- # type=--id 00:26:02.628 04:23:17 -- common/autotest_common.sh@797 -- # id=0 00:26:02.628 04:23:17 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:26:02.628 04:23:17 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:02.628 04:23:17 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:26:02.628 04:23:17 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:26:02.628 04:23:17 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:26:02.628 04:23:17 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:02.628 nvmf_trace.0 00:26:02.628 04:23:17 -- common/autotest_common.sh@811 -- # return 0 00:26:02.628 04:23:17 -- fips/fips.sh@16 -- # killprocess 4115114 00:26:02.628 04:23:17 -- common/autotest_common.sh@926 -- # '[' -z 4115114 ']' 00:26:02.628 04:23:17 -- common/autotest_common.sh@930 -- # kill -0 4115114 00:26:02.628 04:23:17 -- common/autotest_common.sh@931 -- # uname 00:26:02.628 04:23:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:02.628 04:23:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4115114 00:26:02.628 04:23:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:02.628 04:23:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo 
']' 00:26:02.628 04:23:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4115114' 00:26:02.628 killing process with pid 4115114 00:26:02.628 04:23:17 -- common/autotest_common.sh@945 -- # kill 4115114 00:26:02.628 Received shutdown signal, test time was about 10.000000 seconds 00:26:02.628 00:26:02.628 Latency(us) 00:26:02.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.628 =================================================================================================================== 00:26:02.628 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.628 04:23:17 -- common/autotest_common.sh@950 -- # wait 4115114 00:26:03.195 04:23:17 -- fips/fips.sh@17 -- # nvmftestfini 00:26:03.195 04:23:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:03.195 04:23:17 -- nvmf/common.sh@116 -- # sync 00:26:03.195 04:23:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:03.195 04:23:17 -- nvmf/common.sh@119 -- # set +e 00:26:03.195 04:23:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:03.195 04:23:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:03.195 rmmod nvme_tcp 00:26:03.195 rmmod nvme_fabrics 00:26:03.195 rmmod nvme_keyring 00:26:03.195 04:23:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:03.195 04:23:17 -- nvmf/common.sh@123 -- # set -e 00:26:03.195 04:23:17 -- nvmf/common.sh@124 -- # return 0 00:26:03.195 04:23:17 -- nvmf/common.sh@477 -- # '[' -n 4115004 ']' 00:26:03.195 04:23:17 -- nvmf/common.sh@478 -- # killprocess 4115004 00:26:03.195 04:23:17 -- common/autotest_common.sh@926 -- # '[' -z 4115004 ']' 00:26:03.195 04:23:17 -- common/autotest_common.sh@930 -- # kill -0 4115004 00:26:03.195 04:23:17 -- common/autotest_common.sh@931 -- # uname 00:26:03.195 04:23:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:03.195 04:23:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4115004 00:26:03.195 04:23:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:03.195 04:23:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:03.195 04:23:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4115004' 00:26:03.195 killing process with pid 4115004 00:26:03.195 04:23:17 -- common/autotest_common.sh@945 -- # kill 4115004 00:26:03.195 04:23:17 -- common/autotest_common.sh@950 -- # wait 4115004 00:26:03.762 04:23:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:03.762 04:23:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:03.762 04:23:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:03.762 04:23:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.762 04:23:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:03.762 04:23:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.762 04:23:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.762 04:23:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.668 04:23:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:05.668 04:23:20 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:06.003 00:26:06.003 real 0m20.855s 00:26:06.003 user 0m24.256s 00:26:06.003 sys 0m7.145s 00:26:06.003 04:23:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.003 04:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:06.003 ************************************ 00:26:06.003 END TEST nvmf_fips 00:26:06.003 
************************************ 00:26:06.003 04:23:20 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:26:06.003 04:23:20 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:06.003 04:23:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:06.003 04:23:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.003 04:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:06.003 ************************************ 00:26:06.003 START TEST nvmf_fuzz 00:26:06.003 ************************************ 00:26:06.003 04:23:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:06.003 * Looking for test storage... 00:26:06.003 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:26:06.003 04:23:20 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.003 04:23:20 -- nvmf/common.sh@7 -- # uname -s 00:26:06.003 04:23:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.003 04:23:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.003 04:23:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.003 04:23:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.003 04:23:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.003 04:23:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.003 04:23:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.003 04:23:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.003 04:23:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.003 04:23:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.003 04:23:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:26:06.003 04:23:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:26:06.003 04:23:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.003 04:23:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.003 04:23:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:06.003 04:23:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:06.003 04:23:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.003 04:23:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.003 04:23:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.003 04:23:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.003 04:23:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.003 04:23:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.003 04:23:20 -- paths/export.sh@5 -- # export PATH 00:26:06.003 04:23:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.003 04:23:20 -- nvmf/common.sh@46 -- # : 0 00:26:06.003 04:23:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:06.003 04:23:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:06.003 04:23:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:06.003 04:23:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.003 04:23:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.003 04:23:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:06.003 04:23:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:06.003 04:23:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:06.003 04:23:20 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:06.003 04:23:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:06.003 04:23:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.003 04:23:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:06.003 04:23:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:06.003 04:23:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:06.003 04:23:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.003 04:23:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.003 04:23:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.003 04:23:20 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:26:06.003 04:23:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:06.003 04:23:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:06.003 04:23:20 -- common/autotest_common.sh@10 -- # set +x 00:26:12.581 04:23:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:12.581 04:23:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:12.581 04:23:25 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:26:12.581 04:23:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:12.581 04:23:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:12.581 04:23:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:12.581 04:23:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:12.581 04:23:25 -- nvmf/common.sh@294 -- # net_devs=() 00:26:12.581 04:23:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:12.581 04:23:25 -- nvmf/common.sh@295 -- # e810=() 00:26:12.581 04:23:25 -- nvmf/common.sh@295 -- # local -ga e810 00:26:12.581 04:23:25 -- nvmf/common.sh@296 -- # x722=() 00:26:12.581 04:23:25 -- nvmf/common.sh@296 -- # local -ga x722 00:26:12.581 04:23:25 -- nvmf/common.sh@297 -- # mlx=() 00:26:12.581 04:23:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:12.581 04:23:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.581 04:23:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:12.581 04:23:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:12.581 04:23:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:12.581 04:23:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:12.581 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:12.581 04:23:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:12.581 04:23:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:12.581 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:12.581 04:23:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:12.581 04:23:25 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
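The loop the trace enters here resolves each detected PCI function to its kernel net device by globbing sysfs, which is how the two cvl_0_* interfaces reported below are found. A minimal standalone sketch of that lookup, assuming the same two E810 functions this run detects:

for pci in 0000:27:00.0 0000:27:00.1; do
    # a netdev bound to the function shows up as a directory under .../net/
    devs=(/sys/bus/pci/devices/"$pci"/net/*)
    if [[ -e ${devs[0]} ]]; then
        echo "Found net devices under $pci: ${devs[*]##*/}"
    else
        echo "No net device bound to $pci (is the ice driver loaded?)"
    fi
done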
00:26:12.581 04:23:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.581 04:23:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:12.581 04:23:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.581 04:23:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:12.581 Found net devices under 0000:27:00.0: cvl_0_0 00:26:12.581 04:23:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.581 04:23:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:12.581 04:23:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.581 04:23:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:12.581 04:23:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.581 04:23:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:12.581 Found net devices under 0000:27:00.1: cvl_0_1 00:26:12.581 04:23:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.581 04:23:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:12.581 04:23:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:12.581 04:23:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:12.581 04:23:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:12.581 04:23:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.581 04:23:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.581 04:23:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.581 04:23:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:12.581 04:23:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.581 04:23:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.581 04:23:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:12.581 04:23:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.581 04:23:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.581 04:23:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:12.581 04:23:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:12.581 04:23:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.581 04:23:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.581 04:23:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.581 04:23:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.581 04:23:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:12.581 04:23:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.581 04:23:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.581 04:23:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.581 04:23:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:12.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:12.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:26:12.581 00:26:12.581 --- 10.0.0.2 ping statistics --- 00:26:12.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.581 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:26:12.581 04:23:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:26:12.581 00:26:12.581 --- 10.0.0.1 ping statistics --- 00:26:12.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.581 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:26:12.581 04:23:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.581 04:23:26 -- nvmf/common.sh@410 -- # return 0 00:26:12.581 04:23:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:12.581 04:23:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.581 04:23:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:12.581 04:23:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:12.581 04:23:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.581 04:23:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:12.581 04:23:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:12.581 04:23:26 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:12.581 04:23:26 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4121426 00:26:12.581 04:23:26 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:12.581 04:23:26 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4121426 00:26:12.581 04:23:26 -- common/autotest_common.sh@819 -- # '[' -z 4121426 ']' 00:26:12.581 04:23:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.581 04:23:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:12.581 04:23:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
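Condensed, the bring-up traced above gives the test a self-contained NVMe/TCP topology on one host: the target-side port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, listener port 4420 is opened in the firewall, and both directions are verified with ping. A sketch of the equivalent commands, using the interface names and addresses reported by this run:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

Because the two ports sit in separate namespaces, traffic between 10.0.0.1 and 10.0.0.2 has to traverse the physical link, so target and initiator exercise real NIC paths while running on a single machine.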
00:26:12.581 04:23:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:12.581 04:23:26 -- common/autotest_common.sh@10 -- # set +x 00:26:12.581 04:23:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:12.581 04:23:27 -- common/autotest_common.sh@852 -- # return 0 00:26:12.581 04:23:27 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:12.581 04:23:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.581 04:23:27 -- common/autotest_common.sh@10 -- # set +x 00:26:12.582 04:23:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.582 04:23:27 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:12.582 04:23:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.582 04:23:27 -- common/autotest_common.sh@10 -- # set +x 00:26:12.582 Malloc0 00:26:12.582 04:23:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.582 04:23:27 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:12.582 04:23:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.582 04:23:27 -- common/autotest_common.sh@10 -- # set +x 00:26:12.582 04:23:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.582 04:23:27 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:12.582 04:23:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.582 04:23:27 -- common/autotest_common.sh@10 -- # set +x 00:26:12.582 04:23:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.582 04:23:27 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:12.582 04:23:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.582 04:23:27 -- common/autotest_common.sh@10 -- # set +x 00:26:12.582 04:23:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.582 04:23:27 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:12.582 04:23:27 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:44.683 Fuzzing completed. Shutting down the fuzz application 00:26:44.683 00:26:44.683 Dumping successful admin opcodes: 00:26:44.683 8, 9, 10, 24, 00:26:44.683 Dumping successful io opcodes: 00:26:44.683 0, 9, 00:26:44.683 NS: 0x200003aefec0 I/O qp, Total commands completed: 859542, total successful commands: 4994, random_seed: 1129057216 00:26:44.683 NS: 0x200003aefec0 admin qp, Total commands completed: 79536, total successful commands: 627, random_seed: 1886067264 00:26:44.683 04:23:57 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:44.683 Fuzzing completed. 
Shutting down the fuzz application 00:26:44.683 00:26:44.683 Dumping successful admin opcodes: 00:26:44.683 24, 00:26:44.683 Dumping successful io opcodes: 00:26:44.683 00:26:44.683 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 904798549 00:26:44.683 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 904890573 00:26:44.683 04:23:59 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.683 04:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.683 04:23:59 -- common/autotest_common.sh@10 -- # set +x 00:26:44.683 04:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.683 04:23:59 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:44.683 04:23:59 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:44.683 04:23:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:44.683 04:23:59 -- nvmf/common.sh@116 -- # sync 00:26:44.683 04:23:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:44.683 04:23:59 -- nvmf/common.sh@119 -- # set +e 00:26:44.683 04:23:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:44.683 04:23:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:44.683 rmmod nvme_tcp 00:26:44.683 rmmod nvme_fabrics 00:26:44.683 rmmod nvme_keyring 00:26:44.683 04:23:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:44.683 04:23:59 -- nvmf/common.sh@123 -- # set -e 00:26:44.683 04:23:59 -- nvmf/common.sh@124 -- # return 0 00:26:44.683 04:23:59 -- nvmf/common.sh@477 -- # '[' -n 4121426 ']' 00:26:44.683 04:23:59 -- nvmf/common.sh@478 -- # killprocess 4121426 00:26:44.683 04:23:59 -- common/autotest_common.sh@926 -- # '[' -z 4121426 ']' 00:26:44.683 04:23:59 -- common/autotest_common.sh@930 -- # kill -0 4121426 00:26:44.683 04:23:59 -- common/autotest_common.sh@931 -- # uname 00:26:44.683 04:23:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:44.683 04:23:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4121426 00:26:44.683 04:23:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:44.683 04:23:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:44.683 04:23:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4121426' 00:26:44.683 killing process with pid 4121426 00:26:44.683 04:23:59 -- common/autotest_common.sh@945 -- # kill 4121426 00:26:44.683 04:23:59 -- common/autotest_common.sh@950 -- # wait 4121426 00:26:45.249 04:23:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:45.249 04:23:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:45.249 04:23:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:45.249 04:23:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.249 04:23:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:45.249 04:23:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.249 04:23:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.249 04:23:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.784 04:24:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:47.784 04:24:01 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:47.784 00:26:47.784 real 0m41.560s 00:26:47.784 user 0m59.503s 00:26:47.784 sys 0m11.759s 
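Stripped of the xtrace noise, the fuzz pass above amounts to three steps: start nvmf_tgt inside the target namespace, expose one 64 MiB malloc-backed namespace over TCP at 10.0.0.2:4420, and run nvme_fuzz against it twice (a 30-second seeded random pass, then a replay of example.json). A condensed sketch using the same parameters as this run; rpc.py stands in for the rpc_cmd helper and $rootdir is an assumed path to the SPDK checkout:

rootdir=/path/to/spdk    # assumption: point at the local SPDK checkout

ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
sleep 3                  # the real script polls the RPC socket (waitforlisten) instead

"$rootdir/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$rootdir/scripts/rpc.py" bdev_malloc_create -b Malloc0 64 512
"$rootdir/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
# pass 1: 30 s of randomized admin/IO commands with a fixed seed
"$rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
# pass 2: replay the canned JSON command list
"$rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j "$rootdir/test/app/fuzz/nvme_fuzz/example.json" -a

"$rootdir/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"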
00:26:47.784 04:24:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.784 04:24:01 -- common/autotest_common.sh@10 -- # set +x 00:26:47.784 ************************************ 00:26:47.784 END TEST nvmf_fuzz 00:26:47.784 ************************************ 00:26:47.784 04:24:01 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:47.784 04:24:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:47.784 04:24:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:47.784 04:24:01 -- common/autotest_common.sh@10 -- # set +x 00:26:47.784 ************************************ 00:26:47.784 START TEST nvmf_multiconnection 00:26:47.784 ************************************ 00:26:47.785 04:24:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:47.785 * Looking for test storage... 00:26:47.785 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:26:47.785 04:24:01 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.785 04:24:01 -- nvmf/common.sh@7 -- # uname -s 00:26:47.785 04:24:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.785 04:24:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.785 04:24:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.785 04:24:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.785 04:24:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.785 04:24:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.785 04:24:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.785 04:24:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.785 04:24:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.785 04:24:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.785 04:24:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:26:47.785 04:24:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:26:47.785 04:24:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.785 04:24:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.785 04:24:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:47.785 04:24:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:47.785 04:24:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.785 04:24:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.785 04:24:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.785 04:24:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.785 04:24:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.785 04:24:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.785 04:24:01 -- paths/export.sh@5 -- # export PATH 00:26:47.785 04:24:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.785 04:24:01 -- nvmf/common.sh@46 -- # : 0 00:26:47.785 04:24:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:47.785 04:24:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:47.785 04:24:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:47.785 04:24:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.785 04:24:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.785 04:24:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:47.785 04:24:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:47.785 04:24:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:47.785 04:24:01 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:47.785 04:24:01 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:47.785 04:24:01 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:47.785 04:24:01 -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:47.785 04:24:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:47.785 04:24:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.785 04:24:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:47.785 04:24:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:47.785 04:24:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:47.785 04:24:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.785 04:24:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.785 04:24:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.785 04:24:01 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:26:47.785 04:24:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:47.785 04:24:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:47.785 04:24:01 -- 
common/autotest_common.sh@10 -- # set +x 00:26:53.059 04:24:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:53.059 04:24:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:53.059 04:24:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:53.059 04:24:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:53.059 04:24:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:53.059 04:24:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:53.059 04:24:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:53.059 04:24:07 -- nvmf/common.sh@294 -- # net_devs=() 00:26:53.059 04:24:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:53.059 04:24:07 -- nvmf/common.sh@295 -- # e810=() 00:26:53.059 04:24:07 -- nvmf/common.sh@295 -- # local -ga e810 00:26:53.059 04:24:07 -- nvmf/common.sh@296 -- # x722=() 00:26:53.059 04:24:07 -- nvmf/common.sh@296 -- # local -ga x722 00:26:53.059 04:24:07 -- nvmf/common.sh@297 -- # mlx=() 00:26:53.059 04:24:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:53.059 04:24:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.059 04:24:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:53.059 04:24:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:53.059 04:24:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:53.059 04:24:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:53.059 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:53.059 04:24:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:53.059 04:24:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:53.059 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:53.059 04:24:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.059 
04:24:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:53.059 04:24:07 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:53.059 04:24:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.059 04:24:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:53.059 04:24:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.059 04:24:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:53.059 Found net devices under 0000:27:00.0: cvl_0_0 00:26:53.059 04:24:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.059 04:24:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:53.059 04:24:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.059 04:24:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:53.059 04:24:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.059 04:24:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:53.059 Found net devices under 0000:27:00.1: cvl_0_1 00:26:53.059 04:24:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.059 04:24:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:53.059 04:24:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:53.059 04:24:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:53.059 04:24:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.059 04:24:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.059 04:24:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.059 04:24:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:53.059 04:24:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.059 04:24:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.059 04:24:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:53.059 04:24:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.059 04:24:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.059 04:24:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:53.059 04:24:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:53.059 04:24:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.059 04:24:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.059 04:24:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.059 04:24:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.059 04:24:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:53.059 04:24:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.059 04:24:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.059 04:24:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.059 04:24:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:53.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:53.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:26:53.059 00:26:53.059 --- 10.0.0.2 ping statistics --- 00:26:53.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.059 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:26:53.059 04:24:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:26:53.059 00:26:53.059 --- 10.0.0.1 ping statistics --- 00:26:53.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.059 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:26:53.059 04:24:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.059 04:24:07 -- nvmf/common.sh@410 -- # return 0 00:26:53.059 04:24:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:53.059 04:24:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.059 04:24:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:53.059 04:24:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.059 04:24:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:53.059 04:24:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:53.059 04:24:07 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:53.059 04:24:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:53.059 04:24:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:53.059 04:24:07 -- common/autotest_common.sh@10 -- # set +x 00:26:53.059 04:24:07 -- nvmf/common.sh@469 -- # nvmfpid=4132297 00:26:53.059 04:24:07 -- nvmf/common.sh@470 -- # waitforlisten 4132297 00:26:53.059 04:24:07 -- common/autotest_common.sh@819 -- # '[' -z 4132297 ']' 00:26:53.059 04:24:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.059 04:24:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:53.059 04:24:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.059 04:24:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:53.059 04:24:07 -- common/autotest_common.sh@10 -- # set +x 00:26:53.059 04:24:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:53.059 [2024-05-14 04:24:07.517251] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:53.059 [2024-05-14 04:24:07.517363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.059 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.059 [2024-05-14 04:24:07.643407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.318 [2024-05-14 04:24:07.737923] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:53.318 [2024-05-14 04:24:07.738088] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.318 [2024-05-14 04:24:07.738102] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
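The rest of the trace expands an 11-iteration loop: for each i in 1..11 it creates a 64 MiB malloc bdev, wraps it in subsystem nqn.2016-06.io.spdk:cnode$i listening on 10.0.0.2:4420, then connects to every subsystem from the initiator side and waits for serial SPDK$i to appear in lsblk before moving on. A condensed sketch of that sequence, again with rpc.py standing in for rpc_cmd and $rootdir assumed:

rootdir=/path/to/spdk    # assumption: point at the local SPDK checkout
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda
hostid=80ef6226-405e-ee11-906e-a4bf01973fda

"$rootdir/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 11); do
    "$rootdir/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$i"
    "$rootdir/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

for i in $(seq 1 11); do
    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    # same check as the script's waitforserial: poll lsblk until the serial shows up
    until lsblk -l -o NAME,SERIAL | grep -qw "SPDK$i"; do sleep 2; done
done

With all eleven namespaces attached, the trace then launches the fio read workload shown at the end of this section.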
00:26:53.318 [2024-05-14 04:24:07.738110] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.318 [2024-05-14 04:24:07.738165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.318 [2024-05-14 04:24:07.738273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.318 [2024-05-14 04:24:07.738311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.318 [2024-05-14 04:24:07.738322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.884 04:24:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:53.884 04:24:08 -- common/autotest_common.sh@852 -- # return 0 00:26:53.884 04:24:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:53.884 04:24:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 04:24:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.884 04:24:08 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 [2024-05-14 04:24:08.247539] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@21 -- # seq 1 11 00:26:53.884 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.884 04:24:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 Malloc1 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 [2024-05-14 04:24:08.319675] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.884 04:24:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 Malloc2 00:26:53.884 04:24:08 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.884 04:24:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 Malloc3 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:53.884 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.884 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:53.884 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.885 04:24:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:53.885 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:53.885 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.885 Malloc4 00:26:53.885 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.144 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 
-- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.145 04:24:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 Malloc5 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.145 04:24:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 Malloc6 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.145 04:24:08 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 Malloc7 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.145 04:24:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.145 Malloc8 00:26:54.145 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.145 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:54.145 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.145 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.406 04:24:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 Malloc9 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.406 04:24:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 Malloc10 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.406 04:24:08 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 Malloc11 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:54.406 04:24:08 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:26:54.406 04:24:08 -- common/autotest_common.sh@10 -- # set +x 00:26:54.406 04:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:54.406 04:24:08 -- target/multiconnection.sh@28 -- # seq 1 11 00:26:54.406 04:24:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.406 04:24:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:55.855 04:24:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:55.855 04:24:10 -- common/autotest_common.sh@1177 -- # local i=0 00:26:55.855 04:24:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:55.855 04:24:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:55.855 04:24:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:57.761 04:24:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:57.761 04:24:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:57.761 04:24:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:26:57.761 04:24:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:57.761 04:24:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:57.761 04:24:12 -- common/autotest_common.sh@1187 -- # return 0 00:26:57.761 04:24:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.761 04:24:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:59.668 04:24:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:59.668 04:24:13 -- common/autotest_common.sh@1177 -- # local i=0 00:26:59.668 04:24:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:59.668 04:24:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:59.668 04:24:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:01.573 04:24:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:01.573 04:24:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:01.573 04:24:15 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:27:01.573 04:24:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:01.573 04:24:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:01.573 04:24:15 -- common/autotest_common.sh@1187 -- # return 0 00:27:01.573 04:24:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:01.573 04:24:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:02.953 04:24:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:02.953 04:24:17 -- common/autotest_common.sh@1177 -- # local i=0 00:27:02.953 04:24:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:02.953 04:24:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:02.953 04:24:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:05.487 04:24:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:05.487 04:24:19 -- common/autotest_common.sh@1186 -- # 
lsblk -l -o NAME,SERIAL 00:27:05.487 04:24:19 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:27:05.487 04:24:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:05.487 04:24:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:05.487 04:24:19 -- common/autotest_common.sh@1187 -- # return 0 00:27:05.487 04:24:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:05.487 04:24:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:06.422 04:24:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:06.422 04:24:21 -- common/autotest_common.sh@1177 -- # local i=0 00:27:06.422 04:24:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:06.422 04:24:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:06.422 04:24:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:08.957 04:24:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:08.957 04:24:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:08.957 04:24:23 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:27:08.957 04:24:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:08.958 04:24:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:08.958 04:24:23 -- common/autotest_common.sh@1187 -- # return 0 00:27:08.958 04:24:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.958 04:24:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:10.333 04:24:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:10.333 04:24:24 -- common/autotest_common.sh@1177 -- # local i=0 00:27:10.333 04:24:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:10.333 04:24:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:10.333 04:24:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:12.241 04:24:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:12.241 04:24:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:12.241 04:24:26 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:27:12.241 04:24:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:12.241 04:24:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:12.241 04:24:26 -- common/autotest_common.sh@1187 -- # return 0 00:27:12.241 04:24:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:12.241 04:24:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:13.622 04:24:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:13.622 04:24:28 -- common/autotest_common.sh@1177 -- # local i=0 00:27:13.622 04:24:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:13.622 04:24:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:13.622 04:24:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:16.151 
04:24:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:16.151 04:24:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:16.151 04:24:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:27:16.151 04:24:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:16.151 04:24:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:16.151 04:24:30 -- common/autotest_common.sh@1187 -- # return 0 00:27:16.151 04:24:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:16.151 04:24:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:17.556 04:24:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:17.556 04:24:32 -- common/autotest_common.sh@1177 -- # local i=0 00:27:17.556 04:24:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:17.556 04:24:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:17.556 04:24:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:19.460 04:24:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:19.460 04:24:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:19.460 04:24:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:27:19.460 04:24:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:19.460 04:24:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:19.460 04:24:34 -- common/autotest_common.sh@1187 -- # return 0 00:27:19.461 04:24:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:19.461 04:24:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:21.361 04:24:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:21.361 04:24:35 -- common/autotest_common.sh@1177 -- # local i=0 00:27:21.361 04:24:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:21.361 04:24:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:21.361 04:24:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:23.263 04:24:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:23.263 04:24:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:23.263 04:24:37 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:27:23.263 04:24:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:23.263 04:24:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:23.263 04:24:37 -- common/autotest_common.sh@1187 -- # return 0 00:27:23.263 04:24:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:23.263 04:24:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:25.171 04:24:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:25.171 04:24:39 -- common/autotest_common.sh@1177 -- # local i=0 00:27:25.171 04:24:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:25.171 04:24:39 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:25.171 04:24:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:27.073 04:24:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:27.073 04:24:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:27.073 04:24:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:27:27.073 04:24:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:27.073 04:24:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:27.073 04:24:41 -- common/autotest_common.sh@1187 -- # return 0 00:27:27.073 04:24:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.073 04:24:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:28.980 04:24:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:28.980 04:24:43 -- common/autotest_common.sh@1177 -- # local i=0 00:27:28.980 04:24:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:28.980 04:24:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:28.980 04:24:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:30.881 04:24:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:30.881 04:24:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:30.881 04:24:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:27:30.881 04:24:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:30.881 04:24:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:30.881 04:24:45 -- common/autotest_common.sh@1187 -- # return 0 00:27:30.881 04:24:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.881 04:24:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:32.788 04:24:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:32.788 04:24:47 -- common/autotest_common.sh@1177 -- # local i=0 00:27:32.788 04:24:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:32.788 04:24:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:32.788 04:24:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:34.698 04:24:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:34.698 04:24:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:34.698 04:24:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:27:34.698 04:24:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:34.698 04:24:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:34.698 04:24:49 -- common/autotest_common.sh@1187 -- # return 0 00:27:34.698 04:24:49 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:34.698 [global] 00:27:34.698 thread=1 00:27:34.698 invalidate=1 00:27:34.698 rw=read 00:27:34.698 time_based=1 00:27:34.698 runtime=10 00:27:34.698 ioengine=libaio 00:27:34.698 direct=1 00:27:34.698 bs=262144 00:27:34.698 iodepth=64 00:27:34.698 norandommap=1 00:27:34.698 numjobs=1 00:27:34.698 00:27:34.698 [job0] 
00:27:34.698 filename=/dev/nvme0n1 00:27:34.698 [job1] 00:27:34.698 filename=/dev/nvme10n1 00:27:34.698 [job2] 00:27:34.698 filename=/dev/nvme1n1 00:27:34.698 [job3] 00:27:34.698 filename=/dev/nvme2n1 00:27:34.698 [job4] 00:27:34.698 filename=/dev/nvme3n1 00:27:34.698 [job5] 00:27:34.698 filename=/dev/nvme4n1 00:27:34.698 [job6] 00:27:34.698 filename=/dev/nvme5n1 00:27:34.698 [job7] 00:27:34.698 filename=/dev/nvme6n1 00:27:34.698 [job8] 00:27:34.698 filename=/dev/nvme7n1 00:27:34.698 [job9] 00:27:34.698 filename=/dev/nvme8n1 00:27:34.698 [job10] 00:27:34.698 filename=/dev/nvme9n1 00:27:34.958 Could not set queue depth (nvme0n1) 00:27:34.958 Could not set queue depth (nvme10n1) 00:27:34.958 Could not set queue depth (nvme1n1) 00:27:34.958 Could not set queue depth (nvme2n1) 00:27:34.958 Could not set queue depth (nvme3n1) 00:27:34.958 Could not set queue depth (nvme4n1) 00:27:34.958 Could not set queue depth (nvme5n1) 00:27:34.958 Could not set queue depth (nvme6n1) 00:27:34.958 Could not set queue depth (nvme7n1) 00:27:34.958 Could not set queue depth (nvme8n1) 00:27:34.958 Could not set queue depth (nvme9n1) 00:27:35.216 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:35.216 fio-3.35 00:27:35.216 Starting 11 threads 00:27:47.484 00:27:47.484 job0: (groupid=0, jobs=1): err= 0: pid=4140818: Tue May 14 04:25:00 2024 00:27:47.484 read: IOPS=711, BW=178MiB/s (186MB/s)(1795MiB/10091msec) 00:27:47.484 slat (usec): min=11, max=70142, avg=1351.73, stdev=3578.60 00:27:47.484 clat (msec): min=25, max=188, avg=88.55, stdev=19.97 00:27:47.484 lat (msec): min=25, max=188, avg=89.90, stdev=20.32 00:27:47.484 clat percentiles (msec): 00:27:47.484 | 1.00th=[ 49], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 73], 00:27:47.484 | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 90], 00:27:47.484 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 115], 95.00th=[ 124], 00:27:47.484 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 169], 99.95th=[ 169], 00:27:47.484 | 99.99th=[ 188] 00:27:47.484 bw ( KiB/s): min=135680, max=243712, per=8.48%, avg=182184.10, stdev=29280.76, samples=20 00:27:47.484 iops : min= 530, max= 952, avg=711.65, stdev=114.39, samples=20 00:27:47.484 lat (msec) : 50=1.25%, 100=74.23%, 250=24.52% 00:27:47.484 cpu : usr=0.22%, 
sys=2.25%, ctx=1500, majf=0, minf=4097 00:27:47.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:27:47.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.484 issued rwts: total=7179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.484 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.484 job1: (groupid=0, jobs=1): err= 0: pid=4140831: Tue May 14 04:25:00 2024 00:27:47.484 read: IOPS=559, BW=140MiB/s (147MB/s)(1406MiB/10055msec) 00:27:47.484 slat (usec): min=10, max=83690, avg=1739.92, stdev=4866.66 00:27:47.484 clat (msec): min=43, max=225, avg=112.60, stdev=41.87 00:27:47.484 lat (msec): min=48, max=225, avg=114.34, stdev=42.56 00:27:47.484 clat percentiles (msec): 00:27:47.484 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 66], 00:27:47.484 | 30.00th=[ 75], 40.00th=[ 90], 50.00th=[ 110], 60.00th=[ 138], 00:27:47.484 | 70.00th=[ 148], 80.00th=[ 155], 90.00th=[ 165], 95.00th=[ 176], 00:27:47.484 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 213], 99.95th=[ 226], 00:27:47.484 | 99.99th=[ 226] 00:27:47.484 bw ( KiB/s): min=91136, max=244224, per=6.63%, avg=142336.00, stdev=54193.13, samples=20 00:27:47.484 iops : min= 356, max= 954, avg=556.00, stdev=211.69, samples=20 00:27:47.484 lat (msec) : 50=0.11%, 100=45.22%, 250=54.67% 00:27:47.484 cpu : usr=0.20%, sys=1.92%, ctx=1198, majf=0, minf=4097 00:27:47.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:47.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.484 issued rwts: total=5623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.484 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.484 job2: (groupid=0, jobs=1): err= 0: pid=4140860: Tue May 14 04:25:00 2024 00:27:47.484 read: IOPS=689, BW=172MiB/s (181MB/s)(1732MiB/10045msec) 00:27:47.484 slat (usec): min=5, max=89218, avg=1062.07, stdev=4771.74 00:27:47.484 clat (msec): min=2, max=252, avg=91.67, stdev=45.47 00:27:47.484 lat (msec): min=2, max=257, avg=92.73, stdev=46.25 00:27:47.484 clat percentiles (msec): 00:27:47.484 | 1.00th=[ 10], 5.00th=[ 19], 10.00th=[ 28], 20.00th=[ 48], 00:27:47.484 | 30.00th=[ 62], 40.00th=[ 83], 50.00th=[ 95], 60.00th=[ 109], 00:27:47.484 | 70.00th=[ 118], 80.00th=[ 138], 90.00th=[ 150], 95.00th=[ 159], 00:27:47.484 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 213], 99.95th=[ 232], 00:27:47.484 | 99.99th=[ 253] 00:27:47.484 bw ( KiB/s): min=98304, max=350208, per=8.19%, avg=175780.25, stdev=73913.20, samples=20 00:27:47.484 iops : min= 384, max= 1368, avg=686.60, stdev=288.76, samples=20 00:27:47.484 lat (msec) : 4=0.10%, 10=1.21%, 20=4.75%, 50=15.49%, 100=32.56% 00:27:47.484 lat (msec) : 250=45.87%, 500=0.03% 00:27:47.484 cpu : usr=0.17%, sys=1.66%, ctx=1540, majf=0, minf=4097 00:27:47.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:47.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.484 issued rwts: total=6929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.484 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.484 job3: (groupid=0, jobs=1): err= 0: pid=4140871: Tue May 14 04:25:00 2024 00:27:47.484 read: IOPS=687, BW=172MiB/s (180MB/s)(1735MiB/10090msec) 
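The connect phase traced above reduces to a loop of this shape: connect each subsystem over TCP, then poll lsblk until a namespace reporting the expected SPDKn serial appears. A minimal sketch built only from the commands visible in the trace (the retry bound and sleep interval match the waitforserial helper; the host NQN/ID and the 10.0.0.2:4420 target address are the ones shown above):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda
    for i in $(seq 1 11); do
        nvme connect --hostnqn=$HOSTNQN --hostid=80ef6226-405e-ee11-906e-a4bf01973fda \
            -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
        # wait until exactly one block device reports serial SPDK$i
        try=0
        while (( try++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") == 1 )) && break
        done
    done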
00:27:47.484 slat (usec): min=11, max=44776, avg=1440.55, stdev=3639.74 00:27:47.484 clat (msec): min=35, max=199, avg=91.55, stdev=19.21 00:27:47.484 lat (msec): min=35, max=199, avg=92.99, stdev=19.54 00:27:47.484 clat percentiles (msec): 00:27:47.484 | 1.00th=[ 57], 5.00th=[ 63], 10.00th=[ 68], 20.00th=[ 78], 00:27:47.484 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 92], 00:27:47.484 | 70.00th=[ 97], 80.00th=[ 109], 90.00th=[ 117], 95.00th=[ 126], 00:27:47.484 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 190], 99.95th=[ 190], 00:27:47.484 | 99.99th=[ 201] 00:27:47.484 bw ( KiB/s): min=132096, max=229888, per=8.20%, avg=176038.85, stdev=26888.45, samples=20 00:27:47.484 iops : min= 516, max= 898, avg=687.65, stdev=105.04, samples=20 00:27:47.484 lat (msec) : 50=0.17%, 100=72.81%, 250=27.02% 00:27:47.484 cpu : usr=0.10%, sys=2.30%, ctx=1448, majf=0, minf=4097 00:27:47.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:47.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.484 issued rwts: total=6940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.484 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.484 job4: (groupid=0, jobs=1): err= 0: pid=4140879: Tue May 14 04:25:00 2024 00:27:47.484 read: IOPS=1256, BW=314MiB/s (329MB/s)(3171MiB/10094msec) 00:27:47.484 slat (usec): min=7, max=58488, avg=775.64, stdev=2296.74 00:27:47.484 clat (msec): min=3, max=169, avg=50.13, stdev=27.19 00:27:47.484 lat (msec): min=3, max=169, avg=50.91, stdev=27.59 00:27:47.484 clat percentiles (msec): 00:27:47.484 | 1.00th=[ 19], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:27:47.484 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 36], 60.00th=[ 51], 00:27:47.484 | 70.00th=[ 62], 80.00th=[ 74], 90.00th=[ 89], 95.00th=[ 107], 00:27:47.484 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 161], 99.95th=[ 163], 00:27:47.484 | 99.99th=[ 169] 00:27:47.484 bw ( KiB/s): min=143360, max=553984, per=15.04%, avg=323020.80, stdev=132834.87, samples=20 00:27:47.484 iops : min= 560, max= 2164, avg=1261.80, stdev=518.89, samples=20 00:27:47.484 lat (msec) : 4=0.01%, 10=0.40%, 20=0.63%, 50=59.17%, 100=33.39% 00:27:47.484 lat (msec) : 250=6.39% 00:27:47.484 cpu : usr=0.13%, sys=2.46%, ctx=2454, majf=0, minf=4097 00:27:47.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:27:47.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.484 issued rwts: total=12682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.484 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.484 job5: (groupid=0, jobs=1): err= 0: pid=4140903: Tue May 14 04:25:00 2024 00:27:47.484 read: IOPS=655, BW=164MiB/s (172MB/s)(1653MiB/10089msec) 00:27:47.484 slat (usec): min=5, max=126641, avg=992.92, stdev=4709.59 00:27:47.484 clat (msec): min=2, max=252, avg=96.62, stdev=48.94 00:27:47.484 lat (msec): min=2, max=252, avg=97.62, stdev=49.65 00:27:47.484 clat percentiles (msec): 00:27:47.484 | 1.00th=[ 11], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 46], 00:27:47.484 | 30.00th=[ 68], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 109], 00:27:47.484 | 70.00th=[ 134], 80.00th=[ 146], 90.00th=[ 161], 95.00th=[ 174], 00:27:47.484 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 230], 99.95th=[ 247], 00:27:47.484 | 99.99th=[ 253] 00:27:47.484 bw ( KiB/s): min=98816, 
max=304640, per=7.80%, avg=167577.60, stdev=57012.96, samples=20 00:27:47.484 iops : min= 386, max= 1190, avg=654.60, stdev=222.71, samples=20 00:27:47.484 lat (msec) : 4=0.06%, 10=0.71%, 20=3.25%, 50=17.59%, 100=34.16% 00:27:47.484 lat (msec) : 250=44.19%, 500=0.03% 00:27:47.484 cpu : usr=0.14%, sys=1.76%, ctx=1590, majf=0, minf=4097 00:27:47.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:47.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.484 issued rwts: total=6610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.484 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.484 job6: (groupid=0, jobs=1): err= 0: pid=4140914: Tue May 14 04:25:00 2024 00:27:47.484 read: IOPS=568, BW=142MiB/s (149MB/s)(1434MiB/10088msec) 00:27:47.484 slat (usec): min=10, max=104250, avg=1636.12, stdev=4856.91 00:27:47.484 clat (msec): min=13, max=251, avg=110.90, stdev=42.75 00:27:47.484 lat (msec): min=13, max=251, avg=112.53, stdev=43.51 00:27:47.484 clat percentiles (msec): 00:27:47.484 | 1.00th=[ 46], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 65], 00:27:47.484 | 30.00th=[ 72], 40.00th=[ 87], 50.00th=[ 107], 60.00th=[ 136], 00:27:47.484 | 70.00th=[ 146], 80.00th=[ 153], 90.00th=[ 165], 95.00th=[ 176], 00:27:47.484 | 99.00th=[ 192], 99.50th=[ 197], 99.90th=[ 209], 99.95th=[ 218], 00:27:47.484 | 99.99th=[ 253] 00:27:47.484 bw ( KiB/s): min=93184, max=258048, per=6.76%, avg=145177.60, stdev=54257.63, samples=20 00:27:47.485 iops : min= 364, max= 1008, avg=567.10, stdev=211.94, samples=20 00:27:47.485 lat (msec) : 20=0.03%, 50=1.53%, 100=44.99%, 250=53.42%, 500=0.02% 00:27:47.485 cpu : usr=0.12%, sys=1.33%, ctx=1273, majf=0, minf=3598 00:27:47.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:47.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.485 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.485 job7: (groupid=0, jobs=1): err= 0: pid=4140923: Tue May 14 04:25:00 2024 00:27:47.485 read: IOPS=834, BW=209MiB/s (219MB/s)(2104MiB/10088msec) 00:27:47.485 slat (usec): min=9, max=106667, avg=1006.61, stdev=3899.13 00:27:47.485 clat (msec): min=4, max=237, avg=75.68, stdev=41.64 00:27:47.485 lat (msec): min=4, max=238, avg=76.68, stdev=42.14 00:27:47.485 clat percentiles (msec): 00:27:47.485 | 1.00th=[ 12], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 50], 00:27:47.485 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 62], 00:27:47.485 | 70.00th=[ 71], 80.00th=[ 118], 90.00th=[ 153], 95.00th=[ 165], 00:27:47.485 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 199], 99.95th=[ 222], 00:27:47.485 | 99.99th=[ 239] 00:27:47.485 bw ( KiB/s): min=96256, max=314368, per=9.95%, avg=213771.00, stdev=84739.13, samples=20 00:27:47.485 iops : min= 376, max= 1228, avg=835.00, stdev=331.07, samples=20 00:27:47.485 lat (msec) : 10=0.67%, 20=2.06%, 50=20.42%, 100=53.39%, 250=23.47% 00:27:47.485 cpu : usr=0.17%, sys=2.49%, ctx=1759, majf=0, minf=4097 00:27:47.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:47.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.485 issued rwts: 
total=8414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.485 job8: (groupid=0, jobs=1): err= 0: pid=4140959: Tue May 14 04:25:00 2024 00:27:47.485 read: IOPS=691, BW=173MiB/s (181MB/s)(1746MiB/10092msec) 00:27:47.485 slat (usec): min=9, max=108293, avg=822.80, stdev=3602.82 00:27:47.485 clat (msec): min=2, max=196, avg=91.62, stdev=37.65 00:27:47.485 lat (msec): min=2, max=257, avg=92.44, stdev=38.01 00:27:47.485 clat percentiles (msec): 00:27:47.485 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 68], 00:27:47.485 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 90], 60.00th=[ 94], 00:27:47.485 | 70.00th=[ 104], 80.00th=[ 118], 90.00th=[ 144], 95.00th=[ 165], 00:27:47.485 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 192], 99.95th=[ 197], 00:27:47.485 | 99.99th=[ 197] 00:27:47.485 bw ( KiB/s): min=100352, max=264192, per=8.25%, avg=177137.70, stdev=40984.38, samples=20 00:27:47.485 iops : min= 392, max= 1032, avg=691.90, stdev=160.17, samples=20 00:27:47.485 lat (msec) : 4=0.23%, 10=1.13%, 20=2.16%, 50=10.53%, 100=53.19% 00:27:47.485 lat (msec) : 250=32.76% 00:27:47.485 cpu : usr=0.17%, sys=1.81%, ctx=1849, majf=0, minf=4097 00:27:47.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:47.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.485 issued rwts: total=6982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.485 job9: (groupid=0, jobs=1): err= 0: pid=4140975: Tue May 14 04:25:00 2024 00:27:47.485 read: IOPS=1074, BW=269MiB/s (282MB/s)(2711MiB/10087msec) 00:27:47.485 slat (usec): min=8, max=65749, avg=799.39, stdev=2382.87 00:27:47.485 clat (usec): min=1628, max=201342, avg=58702.80, stdev=26350.61 00:27:47.485 lat (usec): min=1651, max=201368, avg=59502.19, stdev=26525.49 00:27:47.485 clat percentiles (msec): 00:27:47.485 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 33], 00:27:47.485 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 63], 00:27:47.485 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 105], 00:27:47.485 | 99.00th=[ 163], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 190], 00:27:47.485 | 99.99th=[ 197] 00:27:47.485 bw ( KiB/s): min=148992, max=459776, per=12.85%, avg=275942.40, stdev=80734.58, samples=20 00:27:47.485 iops : min= 582, max= 1796, avg=1077.90, stdev=315.37, samples=20 00:27:47.485 lat (msec) : 2=0.07%, 4=0.53%, 10=0.90%, 20=1.54%, 50=29.28% 00:27:47.485 lat (msec) : 100=61.84%, 250=5.84% 00:27:47.485 cpu : usr=0.20%, sys=2.85%, ctx=2245, majf=0, minf=4097 00:27:47.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:27:47.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.485 issued rwts: total=10843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.485 job10: (groupid=0, jobs=1): err= 0: pid=4140991: Tue May 14 04:25:00 2024 00:27:47.485 read: IOPS=673, BW=168MiB/s (177MB/s)(1686MiB/10012msec) 00:27:47.485 slat (usec): min=7, max=110881, avg=1140.47, stdev=4826.65 00:27:47.485 clat (msec): min=3, max=258, avg=93.85, stdev=50.75 00:27:47.485 lat (msec): min=3, max=271, avg=94.99, stdev=51.51 00:27:47.485 clat percentiles (msec): 00:27:47.485 | 
1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 39], 00:27:47.485 | 30.00th=[ 64], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 106], 00:27:47.485 | 70.00th=[ 133], 80.00th=[ 148], 90.00th=[ 159], 95.00th=[ 171], 00:27:47.485 | 99.00th=[ 199], 99.50th=[ 203], 99.90th=[ 222], 99.95th=[ 232], 00:27:47.485 | 99.99th=[ 259] 00:27:47.485 bw ( KiB/s): min=95744, max=341504, per=7.96%, avg=170982.40, stdev=60487.77, samples=20 00:27:47.485 iops : min= 374, max= 1334, avg=667.90, stdev=236.28, samples=20 00:27:47.485 lat (msec) : 4=0.07%, 10=2.00%, 20=4.23%, 50=18.01%, 100=33.71% 00:27:47.485 lat (msec) : 250=41.96%, 500=0.01% 00:27:47.485 cpu : usr=0.16%, sys=1.81%, ctx=1567, majf=0, minf=4097 00:27:47.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:47.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:47.485 issued rwts: total=6742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:47.485 00:27:47.485 Run status group 0 (all jobs): 00:27:47.485 READ: bw=2097MiB/s (2199MB/s), 140MiB/s-314MiB/s (147MB/s-329MB/s), io=20.7GiB (22.2GB), run=10012-10094msec 00:27:47.485 00:27:47.485 Disk stats (read/write): 00:27:47.485 nvme0n1: ios=14109/0, merge=0/0, ticks=1231284/0, in_queue=1231284, util=96.91% 00:27:47.485 nvme10n1: ios=10986/0, merge=0/0, ticks=1223979/0, in_queue=1223979, util=97.12% 00:27:47.485 nvme1n1: ios=13582/0, merge=0/0, ticks=1233687/0, in_queue=1233687, util=97.41% 00:27:47.485 nvme2n1: ios=13644/0, merge=0/0, ticks=1229142/0, in_queue=1229142, util=97.59% 00:27:47.485 nvme3n1: ios=25048/0, merge=0/0, ticks=1234967/0, in_queue=1234967, util=97.67% 00:27:47.485 nvme4n1: ios=12966/0, merge=0/0, ticks=1233942/0, in_queue=1233942, util=98.03% 00:27:47.485 nvme5n1: ios=11242/0, merge=0/0, ticks=1228017/0, in_queue=1228017, util=98.22% 00:27:47.485 nvme6n1: ios=16569/0, merge=0/0, ticks=1233872/0, in_queue=1233872, util=98.27% 00:27:47.485 nvme7n1: ios=13698/0, merge=0/0, ticks=1237144/0, in_queue=1237144, util=98.78% 00:27:47.485 nvme8n1: ios=21450/0, merge=0/0, ticks=1236029/0, in_queue=1236029, util=99.02% 00:27:47.485 nvme9n1: ios=13076/0, merge=0/0, ticks=1232205/0, in_queue=1232205, util=99.23% 00:27:47.485 04:25:00 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:47.485 [global] 00:27:47.485 thread=1 00:27:47.485 invalidate=1 00:27:47.485 rw=randwrite 00:27:47.485 time_based=1 00:27:47.485 runtime=10 00:27:47.485 ioengine=libaio 00:27:47.485 direct=1 00:27:47.485 bs=262144 00:27:47.485 iodepth=64 00:27:47.485 norandommap=1 00:27:47.485 numjobs=1 00:27:47.485 00:27:47.485 [job0] 00:27:47.485 filename=/dev/nvme0n1 00:27:47.485 [job1] 00:27:47.485 filename=/dev/nvme10n1 00:27:47.485 [job2] 00:27:47.485 filename=/dev/nvme1n1 00:27:47.485 [job3] 00:27:47.485 filename=/dev/nvme2n1 00:27:47.485 [job4] 00:27:47.485 filename=/dev/nvme3n1 00:27:47.485 [job5] 00:27:47.485 filename=/dev/nvme4n1 00:27:47.485 [job6] 00:27:47.485 filename=/dev/nvme5n1 00:27:47.485 [job7] 00:27:47.485 filename=/dev/nvme6n1 00:27:47.485 [job8] 00:27:47.485 filename=/dev/nvme7n1 00:27:47.485 [job9] 00:27:47.485 filename=/dev/nvme8n1 00:27:47.485 [job10] 00:27:47.485 filename=/dev/nvme9n1 00:27:47.485 Could not set queue depth (nvme0n1) 00:27:47.485 Could not set queue depth (nvme10n1) 00:27:47.485 Could not 
set queue depth (nvme1n1) 00:27:47.485 Could not set queue depth (nvme2n1) 00:27:47.485 Could not set queue depth (nvme3n1) 00:27:47.485 Could not set queue depth (nvme4n1) 00:27:47.485 Could not set queue depth (nvme5n1) 00:27:47.485 Could not set queue depth (nvme6n1) 00:27:47.485 Could not set queue depth (nvme7n1) 00:27:47.485 Could not set queue depth (nvme8n1) 00:27:47.485 Could not set queue depth (nvme9n1) 00:27:47.485 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.485 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.485 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.485 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.485 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.485 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.485 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.485 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.485 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.485 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.485 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:47.486 fio-3.35 00:27:47.486 Starting 11 threads 00:27:57.471 00:27:57.471 job0: (groupid=0, jobs=1): err= 0: pid=4142709: Tue May 14 04:25:11 2024 00:27:57.471 write: IOPS=566, BW=142MiB/s (149MB/s)(1429MiB/10085msec); 0 zone resets 00:27:57.471 slat (usec): min=20, max=55183, avg=1660.19, stdev=3448.70 00:27:57.471 clat (msec): min=8, max=215, avg=111.20, stdev=43.54 00:27:57.471 lat (msec): min=9, max=215, avg=112.86, stdev=44.09 00:27:57.471 clat percentiles (msec): 00:27:57.471 | 1.00th=[ 27], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 68], 00:27:57.471 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 125], 60.00th=[ 131], 00:27:57.471 | 70.00th=[ 136], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 176], 00:27:57.471 | 99.00th=[ 197], 99.50th=[ 203], 99.90th=[ 207], 99.95th=[ 209], 00:27:57.471 | 99.99th=[ 215] 00:27:57.471 bw ( KiB/s): min=88064, max=242176, per=10.65%, avg=144674.40, stdev=54281.27, samples=20 00:27:57.471 iops : min= 344, max= 946, avg=565.10, stdev=212.02, samples=20 00:27:57.471 lat (msec) : 10=0.03%, 20=0.33%, 50=3.08%, 100=41.41%, 250=55.14% 00:27:57.471 cpu : usr=1.79%, sys=1.53%, ctx=1745, majf=0, minf=1 00:27:57.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:57.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.471 issued rwts: total=0,5716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.471 job1: (groupid=0, jobs=1): err= 0: pid=4142721: Tue May 14 04:25:11 2024 00:27:57.471 write: IOPS=439, BW=110MiB/s (115MB/s)(1119MiB/10177msec); 0 zone resets 
00:27:57.471 slat (usec): min=15, max=39819, avg=2058.08, stdev=4079.91 00:27:57.471 clat (msec): min=5, max=358, avg=143.39, stdev=44.67 00:27:57.471 lat (msec): min=7, max=358, avg=145.45, stdev=45.19 00:27:57.471 clat percentiles (msec): 00:27:57.471 | 1.00th=[ 48], 5.00th=[ 86], 10.00th=[ 92], 20.00th=[ 95], 00:27:57.471 | 30.00th=[ 99], 40.00th=[ 153], 50.00th=[ 161], 60.00th=[ 167], 00:27:57.471 | 70.00th=[ 171], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 211], 00:27:57.471 | 99.00th=[ 234], 99.50th=[ 288], 99.90th=[ 347], 99.95th=[ 347], 00:27:57.471 | 99.99th=[ 359] 00:27:57.471 bw ( KiB/s): min=76288, max=179712, per=8.31%, avg=112969.20, stdev=32976.13, samples=20 00:27:57.471 iops : min= 298, max= 702, avg=441.25, stdev=128.79, samples=20 00:27:57.471 lat (msec) : 10=0.07%, 20=0.29%, 50=0.74%, 100=32.73%, 250=65.33% 00:27:57.471 lat (msec) : 500=0.85% 00:27:57.471 cpu : usr=1.32%, sys=1.18%, ctx=1508, majf=0, minf=1 00:27:57.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:57.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.471 issued rwts: total=0,4476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.471 job2: (groupid=0, jobs=1): err= 0: pid=4142722: Tue May 14 04:25:11 2024 00:27:57.471 write: IOPS=364, BW=91.0MiB/s (95.5MB/s)(926MiB/10165msec); 0 zone resets 00:27:57.471 slat (usec): min=14, max=111202, avg=2678.76, stdev=4938.30 00:27:57.471 clat (msec): min=51, max=337, avg=172.99, stdev=17.17 00:27:57.471 lat (msec): min=53, max=337, avg=175.66, stdev=16.67 00:27:57.471 clat percentiles (msec): 00:27:57.471 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:27:57.471 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 174], 00:27:57.471 | 70.00th=[ 176], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 197], 00:27:57.471 | 99.00th=[ 249], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 338], 00:27:57.471 | 99.99th=[ 338] 00:27:57.471 bw ( KiB/s): min=73728, max=98304, per=6.86%, avg=93149.35, stdev=5513.31, samples=20 00:27:57.471 iops : min= 288, max= 384, avg=363.80, stdev=21.54, samples=20 00:27:57.471 lat (msec) : 100=0.19%, 250=98.81%, 500=1.00% 00:27:57.471 cpu : usr=1.22%, sys=1.52%, ctx=1002, majf=0, minf=1 00:27:57.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:57.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.471 issued rwts: total=0,3702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.471 job3: (groupid=0, jobs=1): err= 0: pid=4142723: Tue May 14 04:25:11 2024 00:27:57.471 write: IOPS=678, BW=170MiB/s (178MB/s)(1708MiB/10067msec); 0 zone resets 00:27:57.471 slat (usec): min=16, max=9662, avg=1391.61, stdev=2482.68 00:27:57.471 clat (msec): min=3, max=145, avg=92.92, stdev=16.13 00:27:57.471 lat (msec): min=5, max=147, avg=94.31, stdev=16.25 00:27:57.471 clat percentiles (msec): 00:27:57.471 | 1.00th=[ 28], 5.00th=[ 68], 10.00th=[ 71], 20.00th=[ 78], 00:27:57.471 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 101], 00:27:57.471 | 70.00th=[ 103], 80.00th=[ 104], 90.00th=[ 107], 95.00th=[ 110], 00:27:57.471 | 99.00th=[ 123], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 142], 00:27:57.471 | 99.99th=[ 146] 
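Reassembled from the [global] and [jobN] sections echoed above, the job file that fio-wrapper hands to fio for this randwrite pass looks roughly like the following (a sketch; only the first two and the last of the eleven jobs are written out, the remaining jobs follow the same filename pattern):

    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1

    [job1]
    filename=/dev/nvme10n1

    [job10]
    filename=/dev/nvme9n1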
00:27:57.471 bw ( KiB/s): min=149504, max=228352, per=12.75%, avg=173219.25, stdev=23369.99, samples=20 00:27:57.471 iops : min= 584, max= 892, avg=676.55, stdev=91.32, samples=20 00:27:57.471 lat (msec) : 4=0.01%, 10=0.06%, 20=0.40%, 50=1.64%, 100=59.34% 00:27:57.471 lat (msec) : 250=38.55% 00:27:57.471 cpu : usr=1.87%, sys=1.75%, ctx=2037, majf=0, minf=1 00:27:57.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:57.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.471 issued rwts: total=0,6830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.471 job4: (groupid=0, jobs=1): err= 0: pid=4142724: Tue May 14 04:25:11 2024 00:27:57.471 write: IOPS=539, BW=135MiB/s (142MB/s)(1372MiB/10165msec); 0 zone resets 00:27:57.471 slat (usec): min=16, max=92203, avg=1662.07, stdev=3509.19 00:27:57.471 clat (msec): min=4, max=338, avg=116.81, stdev=45.67 00:27:57.471 lat (msec): min=4, max=338, avg=118.47, stdev=46.25 00:27:57.471 clat percentiles (msec): 00:27:57.471 | 1.00th=[ 27], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 74], 00:27:57.471 | 30.00th=[ 95], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 107], 00:27:57.471 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 174], 95.00th=[ 178], 00:27:57.471 | 99.00th=[ 236], 99.50th=[ 262], 99.90th=[ 326], 99.95th=[ 326], 00:27:57.471 | 99.99th=[ 338] 00:27:57.471 bw ( KiB/s): min=92672, max=247808, per=10.22%, avg=138881.05, stdev=42585.88, samples=20 00:27:57.471 iops : min= 362, max= 968, avg=542.40, stdev=166.34, samples=20 00:27:57.471 lat (msec) : 10=0.09%, 20=0.36%, 50=2.73%, 100=38.02%, 250=58.24% 00:27:57.471 lat (msec) : 500=0.55% 00:27:57.471 cpu : usr=1.42%, sys=1.49%, ctx=1934, majf=0, minf=1 00:27:57.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:57.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.471 issued rwts: total=0,5489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.471 job5: (groupid=0, jobs=1): err= 0: pid=4142728: Tue May 14 04:25:11 2024 00:27:57.471 write: IOPS=406, BW=102MiB/s (107MB/s)(1034MiB/10171msec); 0 zone resets 00:27:57.471 slat (usec): min=17, max=89329, avg=2415.17, stdev=4491.65 00:27:57.471 clat (msec): min=111, max=356, avg=154.61, stdev=26.62 00:27:57.471 lat (msec): min=112, max=356, avg=157.02, stdev=26.59 00:27:57.471 clat percentiles (msec): 00:27:57.471 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 131], 00:27:57.471 | 30.00th=[ 134], 40.00th=[ 146], 50.00th=[ 157], 60.00th=[ 163], 00:27:57.471 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 190], 95.00th=[ 199], 00:27:57.471 | 99.00th=[ 236], 99.50th=[ 292], 99.90th=[ 347], 99.95th=[ 347], 00:27:57.471 | 99.99th=[ 359] 00:27:57.471 bw ( KiB/s): min=69632, max=129024, per=7.67%, avg=104258.95, stdev=16067.73, samples=20 00:27:57.471 iops : min= 272, max= 504, avg=407.20, stdev=62.71, samples=20 00:27:57.471 lat (msec) : 250=99.18%, 500=0.82% 00:27:57.471 cpu : usr=1.34%, sys=1.22%, ctx=1087, majf=0, minf=1 00:27:57.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:57.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.471 issued rwts: total=0,4136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.471 job6: (groupid=0, jobs=1): err= 0: pid=4142733: Tue May 14 04:25:11 2024 00:27:57.471 write: IOPS=412, BW=103MiB/s (108MB/s)(1049MiB/10169msec); 0 zone resets 00:27:57.471 slat (usec): min=16, max=54311, avg=2380.32, stdev=4263.39 00:27:57.471 clat (msec): min=25, max=357, avg=152.66, stdev=27.34 00:27:57.471 lat (msec): min=25, max=357, avg=155.04, stdev=27.36 00:27:57.471 clat percentiles (msec): 00:27:57.471 | 1.00th=[ 108], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 131], 00:27:57.471 | 30.00th=[ 133], 40.00th=[ 144], 50.00th=[ 155], 60.00th=[ 161], 00:27:57.471 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 192], 00:27:57.471 | 99.00th=[ 247], 99.50th=[ 300], 99.90th=[ 347], 99.95th=[ 347], 00:27:57.471 | 99.99th=[ 359] 00:27:57.471 bw ( KiB/s): min=84480, max=126976, per=7.79%, avg=105794.55, stdev=14200.36, samples=20 00:27:57.471 iops : min= 330, max= 496, avg=413.20, stdev=55.41, samples=20 00:27:57.471 lat (msec) : 50=0.29%, 100=0.67%, 250=98.14%, 500=0.91% 00:27:57.471 cpu : usr=1.45%, sys=1.09%, ctx=1100, majf=0, minf=1 00:27:57.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:57.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.471 issued rwts: total=0,4196,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.471 job7: (groupid=0, jobs=1): err= 0: pid=4142740: Tue May 14 04:25:11 2024 00:27:57.472 write: IOPS=697, BW=174MiB/s (183MB/s)(1773MiB/10174msec); 0 zone resets 00:27:57.472 slat (usec): min=16, max=64699, avg=1357.10, stdev=2962.60 00:27:57.472 clat (msec): min=3, max=381, avg=90.40, stdev=38.90 00:27:57.472 lat (msec): min=5, max=381, avg=91.76, stdev=39.30 00:27:57.472 clat percentiles (msec): 00:27:57.472 | 1.00th=[ 21], 5.00th=[ 52], 10.00th=[ 63], 20.00th=[ 66], 00:27:57.472 | 30.00th=[ 67], 40.00th=[ 81], 50.00th=[ 92], 60.00th=[ 94], 00:27:57.472 | 70.00th=[ 97], 80.00th=[ 103], 90.00th=[ 111], 95.00th=[ 188], 00:27:57.472 | 99.00th=[ 218], 99.50th=[ 284], 99.90th=[ 359], 99.95th=[ 372], 00:27:57.472 | 99.99th=[ 380] 00:27:57.472 bw ( KiB/s): min=77824, max=275968, per=13.24%, avg=179895.95, stdev=53539.71, samples=20 00:27:57.472 iops : min= 304, max= 1078, avg=702.65, stdev=209.14, samples=20 00:27:57.472 lat (msec) : 4=0.01%, 10=0.18%, 20=0.75%, 50=3.91%, 100=72.39% 00:27:57.472 lat (msec) : 250=22.11%, 500=0.65% 00:27:57.472 cpu : usr=1.93%, sys=1.64%, ctx=2052, majf=0, minf=1 00:27:57.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:57.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.472 issued rwts: total=0,7092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.472 job8: (groupid=0, jobs=1): err= 0: pid=4142743: Tue May 14 04:25:11 2024 00:27:57.472 write: IOPS=368, BW=92.2MiB/s (96.6MB/s)(937MiB/10166msec); 0 zone resets 00:27:57.472 slat (usec): min=16, max=24158, avg=2662.60, stdev=4590.56 00:27:57.472 clat (msec): min=26, max=339, avg=170.86, stdev=19.16 00:27:57.472 lat (msec): min=26, max=339, avg=173.52, stdev=18.86 
00:27:57.472 clat percentiles (msec): 00:27:57.472 | 1.00th=[ 96], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:27:57.472 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 174], 00:27:57.472 | 70.00th=[ 176], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 188], 00:27:57.472 | 99.00th=[ 241], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 338], 00:27:57.472 | 99.99th=[ 338] 00:27:57.472 bw ( KiB/s): min=87888, max=100352, per=6.94%, avg=94327.15, stdev=3122.23, samples=20 00:27:57.472 iops : min= 343, max= 392, avg=368.40, stdev=12.20, samples=20 00:27:57.472 lat (msec) : 50=0.43%, 100=0.64%, 250=98.11%, 500=0.83% 00:27:57.472 cpu : usr=1.40%, sys=1.31%, ctx=986, majf=0, minf=1 00:27:57.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:57.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.472 issued rwts: total=0,3748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.472 job9: (groupid=0, jobs=1): err= 0: pid=4142744: Tue May 14 04:25:11 2024 00:27:57.472 write: IOPS=483, BW=121MiB/s (127MB/s)(1224MiB/10120msec); 0 zone resets 00:27:57.472 slat (usec): min=19, max=106406, avg=1915.01, stdev=4707.47 00:27:57.472 clat (msec): min=4, max=323, avg=130.35, stdev=49.91 00:27:57.472 lat (msec): min=4, max=323, avg=132.27, stdev=50.56 00:27:57.472 clat percentiles (msec): 00:27:57.472 | 1.00th=[ 16], 5.00th=[ 69], 10.00th=[ 88], 20.00th=[ 92], 00:27:57.472 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 157], 00:27:57.472 | 70.00th=[ 165], 80.00th=[ 171], 90.00th=[ 192], 95.00th=[ 218], 00:27:57.472 | 99.00th=[ 257], 99.50th=[ 279], 99.90th=[ 313], 99.95th=[ 313], 00:27:57.472 | 99.99th=[ 326] 00:27:57.472 bw ( KiB/s): min=64512, max=176128, per=9.10%, avg=123664.00, stdev=38219.04, samples=20 00:27:57.472 iops : min= 252, max= 688, avg=483.00, stdev=149.24, samples=20 00:27:57.472 lat (msec) : 10=0.22%, 20=1.27%, 50=2.31%, 100=44.93%, 250=50.27% 00:27:57.472 lat (msec) : 500=1.00% 00:27:57.472 cpu : usr=1.46%, sys=1.24%, ctx=1595, majf=0, minf=1 00:27:57.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:57.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.472 issued rwts: total=0,4894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.472 job10: (groupid=0, jobs=1): err= 0: pid=4142745: Tue May 14 04:25:11 2024 00:27:57.472 write: IOPS=367, BW=91.8MiB/s (96.3MB/s)(934MiB/10166msec); 0 zone resets 00:27:57.472 slat (usec): min=15, max=69856, avg=2661.82, stdev=4715.67 00:27:57.472 clat (msec): min=6, max=339, avg=171.50, stdev=19.57 00:27:57.472 lat (msec): min=9, max=339, avg=174.17, stdev=19.25 00:27:57.472 clat percentiles (msec): 00:27:57.472 | 1.00th=[ 106], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:27:57.472 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 174], 00:27:57.472 | 70.00th=[ 176], 80.00th=[ 178], 90.00th=[ 180], 95.00th=[ 192], 00:27:57.472 | 99.00th=[ 241], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 338], 00:27:57.472 | 99.99th=[ 338] 00:27:57.472 bw ( KiB/s): min=83968, max=100864, per=6.92%, avg=93968.50, stdev=3894.82, samples=20 00:27:57.472 iops : min= 328, max= 394, avg=367.00, stdev=15.23, samples=20 
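Any single job from this group can also be reproduced standalone with fio command-line options instead of a job file; for job0 the equivalent invocation would be roughly (a sketch mirroring the parameters printed above):

    fio --name=job0 --filename=/dev/nvme0n1 --rw=randwrite --bs=262144 --iodepth=64 \
        --ioengine=libaio --direct=1 --thread --invalidate=1 --norandommap \
        --numjobs=1 --time_based --runtime=10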
00:27:57.472 lat (msec) : 10=0.05%, 20=0.29%, 50=0.13%, 100=0.43%, 250=98.21% 00:27:57.472 lat (msec) : 500=0.88% 00:27:57.472 cpu : usr=1.30%, sys=1.38%, ctx=1010, majf=0, minf=1 00:27:57.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:57.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:57.472 issued rwts: total=0,3734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:57.472 00:27:57.472 Run status group 0 (all jobs): 00:27:57.472 WRITE: bw=1327MiB/s (1391MB/s), 91.0MiB/s-174MiB/s (95.5MB/s-183MB/s), io=13.2GiB (14.2GB), run=10067-10177msec 00:27:57.472 00:27:57.472 Disk stats (read/write): 00:27:57.472 nvme0n1: ios=45/11412, merge=0/0, ticks=2098/1225612, in_queue=1227710, util=99.79% 00:27:57.472 nvme10n1: ios=45/8868, merge=0/0, ticks=100/1223249, in_queue=1223349, util=97.07% 00:27:57.472 nvme1n1: ios=5/7332, merge=0/0, ticks=210/1221417, in_queue=1221627, util=97.10% 00:27:57.472 nvme2n1: ios=0/13201, merge=0/0, ticks=0/1198409, in_queue=1198409, util=97.00% 00:27:57.472 nvme3n1: ios=0/10907, merge=0/0, ticks=0/1225452, in_queue=1225452, util=97.20% 00:27:57.472 nvme4n1: ios=43/8194, merge=0/0, ticks=2226/1218228, in_queue=1220454, util=100.00% 00:27:57.472 nvme5n1: ios=0/8317, merge=0/0, ticks=0/1219281, in_queue=1219281, util=97.91% 00:27:57.472 nvme6n1: ios=48/14103, merge=0/0, ticks=1548/1206964, in_queue=1208512, util=100.00% 00:27:57.472 nvme7n1: ios=0/7424, merge=0/0, ticks=0/1220817, in_queue=1220817, util=98.68% 00:27:57.472 nvme8n1: ios=47/9754, merge=0/0, ticks=3383/1214186, in_queue=1217569, util=100.00% 00:27:57.472 nvme9n1: ios=0/7396, merge=0/0, ticks=0/1221526, in_queue=1221526, util=99.11% 00:27:57.472 04:25:11 -- target/multiconnection.sh@36 -- # sync 00:27:57.472 04:25:11 -- target/multiconnection.sh@37 -- # seq 1 11 00:27:57.472 04:25:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:57.472 04:25:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:57.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:57.472 04:25:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:57.472 04:25:11 -- common/autotest_common.sh@1198 -- # local i=0 00:27:57.472 04:25:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:57.472 04:25:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:27:57.472 04:25:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:57.472 04:25:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:27:57.472 04:25:11 -- common/autotest_common.sh@1210 -- # return 0 00:27:57.472 04:25:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.472 04:25:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.472 04:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:57.472 04:25:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.472 04:25:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:57.472 04:25:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:57.730 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:57.730 04:25:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:57.730 04:25:12 -- common/autotest_common.sh@1198 
-- # local i=0 00:27:57.730 04:25:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:57.730 04:25:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:27:57.730 04:25:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:57.730 04:25:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:27:57.730 04:25:12 -- common/autotest_common.sh@1210 -- # return 0 00:27:57.730 04:25:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:57.730 04:25:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.730 04:25:12 -- common/autotest_common.sh@10 -- # set +x 00:27:57.730 04:25:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.730 04:25:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:57.731 04:25:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:58.297 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:58.297 04:25:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:58.297 04:25:12 -- common/autotest_common.sh@1198 -- # local i=0 00:27:58.297 04:25:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:58.297 04:25:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:27:58.297 04:25:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:58.297 04:25:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:27:58.297 04:25:12 -- common/autotest_common.sh@1210 -- # return 0 00:27:58.297 04:25:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:58.297 04:25:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:58.297 04:25:12 -- common/autotest_common.sh@10 -- # set +x 00:27:58.297 04:25:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:58.297 04:25:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:58.297 04:25:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:58.557 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:58.557 04:25:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:58.557 04:25:13 -- common/autotest_common.sh@1198 -- # local i=0 00:27:58.557 04:25:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:58.557 04:25:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:27:58.557 04:25:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:58.557 04:25:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:27:58.557 04:25:13 -- common/autotest_common.sh@1210 -- # return 0 00:27:58.557 04:25:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:58.557 04:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:58.557 04:25:13 -- common/autotest_common.sh@10 -- # set +x 00:27:58.557 04:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:58.558 04:25:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:58.558 04:25:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:59.132 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:59.132 04:25:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:59.132 04:25:13 -- common/autotest_common.sh@1198 -- # local i=0 00:27:59.132 04:25:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 
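The teardown traced here mirrors the connect phase: disconnect each initiator, poll until no block device reports the SPDKn serial any more, then delete the subsystem on the target. A minimal sketch using only the commands visible in the trace (rpc_cmd is the rpc.py wrapper provided by the test environment's common.sh):

    for i in $(seq 1 11); do
        nvme disconnect -n nqn.2016-06.io.spdk:cnode$i
        # wait until the SPDK$i serial has disappeared from lsblk
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
            sleep 1
        done
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    done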
00:27:59.132 04:25:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:27:59.132 04:25:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:27:59.132 04:25:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:59.132 04:25:13 -- common/autotest_common.sh@1210 -- # return 0 00:27:59.132 04:25:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:59.132 04:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.132 04:25:13 -- common/autotest_common.sh@10 -- # set +x 00:27:59.132 04:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.132 04:25:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:59.132 04:25:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:59.132 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:59.132 04:25:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:59.132 04:25:13 -- common/autotest_common.sh@1198 -- # local i=0 00:27:59.132 04:25:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:59.132 04:25:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:27:59.132 04:25:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:59.132 04:25:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:27:59.132 04:25:13 -- common/autotest_common.sh@1210 -- # return 0 00:27:59.132 04:25:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:59.132 04:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.132 04:25:13 -- common/autotest_common.sh@10 -- # set +x 00:27:59.390 04:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.390 04:25:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:59.390 04:25:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:59.648 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:59.648 04:25:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:59.648 04:25:14 -- common/autotest_common.sh@1198 -- # local i=0 00:27:59.648 04:25:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:59.648 04:25:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:27:59.648 04:25:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:59.649 04:25:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:27:59.649 04:25:14 -- common/autotest_common.sh@1210 -- # return 0 00:27:59.649 04:25:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:59.649 04:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.649 04:25:14 -- common/autotest_common.sh@10 -- # set +x 00:27:59.649 04:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.649 04:25:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:59.649 04:25:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:59.907 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:59.907 04:25:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:59.907 04:25:14 -- common/autotest_common.sh@1198 -- # local i=0 00:27:59.907 04:25:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:59.907 04:25:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:27:59.907 
04:25:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:27:59.907 04:25:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:59.907 04:25:14 -- common/autotest_common.sh@1210 -- # return 0 00:27:59.907 04:25:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:59.907 04:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:59.907 04:25:14 -- common/autotest_common.sh@10 -- # set +x 00:27:59.907 04:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:59.907 04:25:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:59.907 04:25:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:00.165 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:00.165 04:25:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:00.165 04:25:14 -- common/autotest_common.sh@1198 -- # local i=0 00:28:00.165 04:25:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:00.165 04:25:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:28:00.165 04:25:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:28:00.165 04:25:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:00.165 04:25:14 -- common/autotest_common.sh@1210 -- # return 0 00:28:00.165 04:25:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:00.165 04:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.165 04:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:00.165 04:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.165 04:25:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:00.165 04:25:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:00.424 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:00.424 04:25:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:00.424 04:25:14 -- common/autotest_common.sh@1198 -- # local i=0 00:28:00.424 04:25:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:00.424 04:25:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:28:00.424 04:25:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:00.424 04:25:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:28:00.424 04:25:14 -- common/autotest_common.sh@1210 -- # return 0 00:28:00.424 04:25:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:00.424 04:25:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.424 04:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:00.424 04:25:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.424 04:25:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:00.424 04:25:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:00.684 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:00.684 04:25:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:00.684 04:25:15 -- common/autotest_common.sh@1198 -- # local i=0 00:28:00.684 04:25:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:00.684 04:25:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:28:00.684 04:25:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:00.684 
04:25:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:28:00.684 04:25:15 -- common/autotest_common.sh@1210 -- # return 0 00:28:00.684 04:25:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:00.684 04:25:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.685 04:25:15 -- common/autotest_common.sh@10 -- # set +x 00:28:00.685 04:25:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.685 04:25:15 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:00.685 04:25:15 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:00.685 04:25:15 -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:00.685 04:25:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:00.685 04:25:15 -- nvmf/common.sh@116 -- # sync 00:28:00.685 04:25:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:00.685 04:25:15 -- nvmf/common.sh@119 -- # set +e 00:28:00.685 04:25:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:00.685 04:25:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:00.685 rmmod nvme_tcp 00:28:00.685 rmmod nvme_fabrics 00:28:00.685 rmmod nvme_keyring 00:28:00.685 04:25:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:00.685 04:25:15 -- nvmf/common.sh@123 -- # set -e 00:28:00.685 04:25:15 -- nvmf/common.sh@124 -- # return 0 00:28:00.685 04:25:15 -- nvmf/common.sh@477 -- # '[' -n 4132297 ']' 00:28:00.685 04:25:15 -- nvmf/common.sh@478 -- # killprocess 4132297 00:28:00.685 04:25:15 -- common/autotest_common.sh@926 -- # '[' -z 4132297 ']' 00:28:00.685 04:25:15 -- common/autotest_common.sh@930 -- # kill -0 4132297 00:28:00.685 04:25:15 -- common/autotest_common.sh@931 -- # uname 00:28:00.685 04:25:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:00.685 04:25:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4132297 00:28:00.685 04:25:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:00.685 04:25:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:00.685 04:25:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4132297' 00:28:00.685 killing process with pid 4132297 00:28:00.685 04:25:15 -- common/autotest_common.sh@945 -- # kill 4132297 00:28:00.685 04:25:15 -- common/autotest_common.sh@950 -- # wait 4132297 00:28:02.066 04:25:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:02.066 04:25:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:02.066 04:25:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:02.066 04:25:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.066 04:25:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:02.066 04:25:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.066 04:25:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.066 04:25:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.972 04:25:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:03.972 00:28:03.972 real 1m16.561s 00:28:03.972 user 4m56.721s 00:28:03.972 sys 0m17.556s 00:28:03.972 04:25:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:03.972 04:25:18 -- common/autotest_common.sh@10 -- # set +x 00:28:03.972 ************************************ 00:28:03.972 END TEST nvmf_multiconnection 00:28:03.972 ************************************ 00:28:03.972 04:25:18 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:03.972 04:25:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:03.972 04:25:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:03.972 04:25:18 -- common/autotest_common.sh@10 -- # set +x 00:28:03.972 ************************************ 00:28:03.972 START TEST nvmf_initiator_timeout 00:28:03.972 ************************************ 00:28:03.972 04:25:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:03.972 * Looking for test storage... 00:28:03.972 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:28:03.972 04:25:18 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.972 04:25:18 -- nvmf/common.sh@7 -- # uname -s 00:28:03.972 04:25:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.972 04:25:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.972 04:25:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.972 04:25:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.972 04:25:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.972 04:25:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.972 04:25:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.972 04:25:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.972 04:25:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.972 04:25:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.972 04:25:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:28:03.972 04:25:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:28:03.972 04:25:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.972 04:25:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.972 04:25:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:03.972 04:25:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:03.972 04:25:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.972 04:25:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.972 04:25:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.972 04:25:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.973 04:25:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.973 04:25:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.973 04:25:18 -- paths/export.sh@5 -- # export PATH 00:28:03.973 04:25:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.973 04:25:18 -- nvmf/common.sh@46 -- # : 0 00:28:03.973 04:25:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:03.973 04:25:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:03.973 04:25:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:03.973 04:25:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.973 04:25:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.973 04:25:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:03.973 04:25:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:03.973 04:25:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:03.973 04:25:18 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:03.973 04:25:18 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:03.973 04:25:18 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:03.973 04:25:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:03.973 04:25:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.973 04:25:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:03.973 04:25:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:03.973 04:25:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:03.973 04:25:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.973 04:25:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.973 04:25:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.231 04:25:18 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:28:04.231 04:25:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:04.231 04:25:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:04.231 04:25:18 -- common/autotest_common.sh@10 -- # set +x 00:28:10.845 04:25:24 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:28:10.845 04:25:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:10.845 04:25:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:10.845 04:25:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:10.845 04:25:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:10.845 04:25:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:10.845 04:25:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:10.845 04:25:24 -- nvmf/common.sh@294 -- # net_devs=() 00:28:10.845 04:25:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:10.845 04:25:24 -- nvmf/common.sh@295 -- # e810=() 00:28:10.845 04:25:24 -- nvmf/common.sh@295 -- # local -ga e810 00:28:10.846 04:25:24 -- nvmf/common.sh@296 -- # x722=() 00:28:10.846 04:25:24 -- nvmf/common.sh@296 -- # local -ga x722 00:28:10.846 04:25:24 -- nvmf/common.sh@297 -- # mlx=() 00:28:10.846 04:25:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:10.846 04:25:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.846 04:25:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:10.846 04:25:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:10.846 04:25:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:10.846 04:25:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:28:10.846 Found 0000:27:00.0 (0x8086 - 0x159b) 00:28:10.846 04:25:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:10.846 04:25:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:28:10.846 Found 0000:27:00.1 (0x8086 - 0x159b) 00:28:10.846 04:25:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@365 
-- # (( 0 > 0 )) 00:28:10.846 04:25:24 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:10.846 04:25:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.846 04:25:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:10.846 04:25:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.846 04:25:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:28:10.846 Found net devices under 0000:27:00.0: cvl_0_0 00:28:10.846 04:25:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.846 04:25:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:10.846 04:25:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.846 04:25:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:10.846 04:25:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.846 04:25:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:28:10.846 Found net devices under 0000:27:00.1: cvl_0_1 00:28:10.846 04:25:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.846 04:25:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:10.846 04:25:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:10.846 04:25:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:10.846 04:25:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.846 04:25:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.846 04:25:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.846 04:25:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:10.846 04:25:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.846 04:25:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.846 04:25:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:10.846 04:25:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.846 04:25:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.846 04:25:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:10.846 04:25:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:10.846 04:25:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.846 04:25:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.846 04:25:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.846 04:25:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.846 04:25:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:10.846 04:25:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.846 04:25:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.846 04:25:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.846 04:25:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:10.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:10.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:28:10.846 00:28:10.846 --- 10.0.0.2 ping statistics --- 00:28:10.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.846 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:28:10.846 04:25:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:10.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:28:10.846 00:28:10.846 --- 10.0.0.1 ping statistics --- 00:28:10.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.846 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:28:10.846 04:25:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.846 04:25:24 -- nvmf/common.sh@410 -- # return 0 00:28:10.846 04:25:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:10.846 04:25:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.846 04:25:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:10.846 04:25:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.846 04:25:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:10.846 04:25:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:10.846 04:25:24 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:10.846 04:25:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:10.846 04:25:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:10.846 04:25:24 -- common/autotest_common.sh@10 -- # set +x 00:28:10.846 04:25:24 -- nvmf/common.sh@469 -- # nvmfpid=4149654 00:28:10.846 04:25:24 -- nvmf/common.sh@470 -- # waitforlisten 4149654 00:28:10.846 04:25:24 -- common/autotest_common.sh@819 -- # '[' -z 4149654 ']' 00:28:10.846 04:25:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.846 04:25:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:10.846 04:25:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.846 04:25:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:10.846 04:25:24 -- common/autotest_common.sh@10 -- # set +x 00:28:10.846 04:25:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:10.846 [2024-05-14 04:25:24.531904] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:10.846 [2024-05-14 04:25:24.532012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.846 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.846 [2024-05-14 04:25:24.653388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.846 [2024-05-14 04:25:24.746608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:10.846 [2024-05-14 04:25:24.746784] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.846 [2024-05-14 04:25:24.746797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:10.846 [2024-05-14 04:25:24.746807] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.846 [2024-05-14 04:25:24.746884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.846 [2024-05-14 04:25:24.747014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.846 [2024-05-14 04:25:24.747113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.846 [2024-05-14 04:25:24.747124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.846 04:25:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:10.846 04:25:25 -- common/autotest_common.sh@852 -- # return 0 00:28:10.846 04:25:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:10.846 04:25:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:10.846 04:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:10.846 04:25:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.846 04:25:25 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:10.846 04:25:25 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:10.846 04:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.846 04:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:10.846 Malloc0 00:28:10.846 04:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.846 04:25:25 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:10.846 04:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.846 04:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:10.846 Delay0 00:28:10.846 04:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.846 04:25:25 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.846 04:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.846 04:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:10.846 [2024-05-14 04:25:25.317616] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.846 04:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.846 04:25:25 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:10.846 04:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.846 04:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:10.846 04:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.847 04:25:25 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.847 04:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.847 04:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:10.847 04:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.847 04:25:25 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.847 04:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.847 04:25:25 -- common/autotest_common.sh@10 -- # set +x 00:28:10.847 [2024-05-14 04:25:25.345816] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.847 04:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
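For reference, the target-side bring-up that initiator_timeout.sh performs above reduces to a short RPC sequence. The sketch below is a manual equivalent, not the script itself: it assumes you are in the spdk source tree, that nvmf_tgt is already listening on the default /var/tmp/spdk.sock, and it reuses the bdev names, NQN and listen address seen in the trace. The delay bdev wraps Malloc0 so the test can later inflate its latencies with bdev_delay_update_latency to provoke initiator timeouts, then lower them again before checking that fio still completes.

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420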
00:28:10.847 04:25:25 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:12.757 04:25:26 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:12.757 04:25:26 -- common/autotest_common.sh@1177 -- # local i=0 00:28:12.757 04:25:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:28:12.757 04:25:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:28:12.757 04:25:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:28:14.664 04:25:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:28:14.664 04:25:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:28:14.664 04:25:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:28:14.664 04:25:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:28:14.664 04:25:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:28:14.664 04:25:28 -- common/autotest_common.sh@1187 -- # return 0 00:28:14.664 04:25:28 -- target/initiator_timeout.sh@35 -- # fio_pid=4150369 00:28:14.664 04:25:28 -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:14.664 04:25:28 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:14.664 [global] 00:28:14.664 thread=1 00:28:14.664 invalidate=1 00:28:14.664 rw=write 00:28:14.664 time_based=1 00:28:14.664 runtime=60 00:28:14.664 ioengine=libaio 00:28:14.664 direct=1 00:28:14.664 bs=4096 00:28:14.664 iodepth=1 00:28:14.664 norandommap=0 00:28:14.664 numjobs=1 00:28:14.664 00:28:14.664 verify_dump=1 00:28:14.664 verify_backlog=512 00:28:14.664 verify_state_save=0 00:28:14.664 do_verify=1 00:28:14.664 verify=crc32c-intel 00:28:14.664 [job0] 00:28:14.664 filename=/dev/nvme0n1 00:28:14.664 Could not set queue depth (nvme0n1) 00:28:14.664 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:14.664 fio-3.35 00:28:14.664 Starting 1 thread 00:28:17.953 04:25:31 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:17.953 04:25:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.953 04:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.953 true 00:28:17.953 04:25:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.953 04:25:31 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:17.953 04:25:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.953 04:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.953 true 00:28:17.953 04:25:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.953 04:25:31 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:17.953 04:25:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.953 04:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.953 true 00:28:17.953 04:25:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.953 04:25:31 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:17.953 04:25:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.953 04:25:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.953 true 
00:28:17.953 04:25:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.953 04:25:31 -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:20.486 04:25:34 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:20.486 04:25:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.486 04:25:34 -- common/autotest_common.sh@10 -- # set +x 00:28:20.486 true 00:28:20.486 04:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.486 04:25:34 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:20.486 04:25:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.486 04:25:34 -- common/autotest_common.sh@10 -- # set +x 00:28:20.486 true 00:28:20.486 04:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.486 04:25:34 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:20.486 04:25:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.486 04:25:34 -- common/autotest_common.sh@10 -- # set +x 00:28:20.486 true 00:28:20.486 04:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.486 04:25:34 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:20.486 04:25:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.486 04:25:34 -- common/autotest_common.sh@10 -- # set +x 00:28:20.486 true 00:28:20.486 04:25:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.486 04:25:34 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:20.486 04:25:34 -- target/initiator_timeout.sh@54 -- # wait 4150369 00:29:16.721 00:29:16.721 job0: (groupid=0, jobs=1): err= 0: pid=4150738: Tue May 14 04:26:29 2024 00:29:16.721 read: IOPS=247, BW=989KiB/s (1013kB/s)(57.9MiB/60001msec) 00:29:16.721 slat (usec): min=3, max=9622, avg=10.39, stdev=79.27 00:29:16.721 clat (usec): min=204, max=41733k, avg=3797.16, stdev=342669.46 00:29:16.721 lat (usec): min=231, max=41733k, avg=3807.56, stdev=342669.55 00:29:16.721 clat percentiles (usec): 00:29:16.721 | 1.00th=[ 245], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 269], 00:29:16.721 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 310], 00:29:16.721 | 70.00th=[ 343], 80.00th=[ 383], 90.00th=[ 441], 95.00th=[ 469], 00:29:16.721 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:29:16.721 | 99.99th=[42730] 00:29:16.721 write: IOPS=247, BW=990KiB/s (1014kB/s)(58.0MiB/60001msec); 0 zone resets 00:29:16.721 slat (usec): min=5, max=30452, avg=13.18, stdev=249.97 00:29:16.721 clat (usec): min=145, max=596, avg=217.53, stdev=50.33 00:29:16.721 lat (usec): min=152, max=31049, avg=230.71, stdev=259.03 00:29:16.721 clat percentiles (usec): 00:29:16.721 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:29:16.721 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 202], 60.00th=[ 219], 00:29:16.721 | 70.00th=[ 229], 80.00th=[ 245], 90.00th=[ 306], 95.00th=[ 330], 00:29:16.721 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 433], 99.95th=[ 453], 00:29:16.721 | 99.99th=[ 553] 00:29:16.721 bw ( KiB/s): min= 288, max= 8400, per=100.00%, avg=6251.79, stdev=2318.42, samples=19 00:29:16.721 iops : min= 72, max= 2100, avg=1562.95, stdev=579.61, samples=19 00:29:16.721 lat (usec) : 250=42.44%, 500=56.30%, 750=0.41%, 1000=0.03% 00:29:16.721 lat (msec) : 2=0.01%, 50=0.81%, >=2000=0.01% 00:29:16.721 cpu : usr=0.27%, sys=0.54%, ctx=29687, majf=0, minf=1 00:29:16.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:16.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.721 issued rwts: total=14835,14848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:16.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:16.721 00:29:16.721 Run status group 0 (all jobs): 00:29:16.721 READ: bw=989KiB/s (1013kB/s), 989KiB/s-989KiB/s (1013kB/s-1013kB/s), io=57.9MiB (60.8MB), run=60001-60001msec 00:29:16.721 WRITE: bw=990KiB/s (1014kB/s), 990KiB/s-990KiB/s (1014kB/s-1014kB/s), io=58.0MiB (60.8MB), run=60001-60001msec 00:29:16.721 00:29:16.721 Disk stats (read/write): 00:29:16.721 nvme0n1: ios=14884/14848, merge=0/0, ticks=15659/3168, in_queue=18827, util=99.86% 00:29:16.721 04:26:29 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:16.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:16.721 04:26:29 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:16.721 04:26:29 -- common/autotest_common.sh@1198 -- # local i=0 00:29:16.721 04:26:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:29:16.721 04:26:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:16.721 04:26:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:16.721 04:26:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:29:16.721 04:26:29 -- common/autotest_common.sh@1210 -- # return 0 00:29:16.721 04:26:29 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:16.721 04:26:29 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:16.722 nvmf hotplug test: fio successful as expected 00:29:16.722 04:26:29 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:16.722 04:26:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.722 04:26:29 -- common/autotest_common.sh@10 -- # set +x 00:29:16.722 04:26:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.722 04:26:29 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:16.722 04:26:29 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:16.722 04:26:29 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:16.722 04:26:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:16.722 04:26:29 -- nvmf/common.sh@116 -- # sync 00:29:16.722 04:26:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:16.722 04:26:29 -- nvmf/common.sh@119 -- # set +e 00:29:16.722 04:26:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:16.722 04:26:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:16.722 rmmod nvme_tcp 00:29:16.722 rmmod nvme_fabrics 00:29:16.722 rmmod nvme_keyring 00:29:16.722 04:26:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:16.722 04:26:29 -- nvmf/common.sh@123 -- # set -e 00:29:16.722 04:26:29 -- nvmf/common.sh@124 -- # return 0 00:29:16.722 04:26:29 -- nvmf/common.sh@477 -- # '[' -n 4149654 ']' 00:29:16.722 04:26:29 -- nvmf/common.sh@478 -- # killprocess 4149654 00:29:16.722 04:26:29 -- common/autotest_common.sh@926 -- # '[' -z 4149654 ']' 00:29:16.722 04:26:29 -- common/autotest_common.sh@930 -- # kill -0 4149654 00:29:16.722 04:26:29 -- common/autotest_common.sh@931 -- # uname 00:29:16.722 04:26:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:16.722 04:26:29 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4149654 00:29:16.722 04:26:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:16.722 04:26:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:16.722 04:26:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4149654' 00:29:16.722 killing process with pid 4149654 00:29:16.722 04:26:29 -- common/autotest_common.sh@945 -- # kill 4149654 00:29:16.722 04:26:29 -- common/autotest_common.sh@950 -- # wait 4149654 00:29:16.722 04:26:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:16.722 04:26:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:16.722 04:26:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:16.722 04:26:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:16.722 04:26:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:16.722 04:26:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.722 04:26:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:16.722 04:26:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.107 04:26:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:18.107 00:29:18.107 real 1m13.860s 00:29:18.107 user 4m40.598s 00:29:18.107 sys 0m6.164s 00:29:18.107 04:26:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:18.107 04:26:32 -- common/autotest_common.sh@10 -- # set +x 00:29:18.107 ************************************ 00:29:18.107 END TEST nvmf_initiator_timeout 00:29:18.107 ************************************ 00:29:18.107 04:26:32 -- nvmf/nvmf.sh@69 -- # [[ phy-fallback == phy ]] 00:29:18.107 04:26:32 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:29:18.107 04:26:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:18.107 04:26:32 -- common/autotest_common.sh@10 -- # set +x 00:29:18.107 04:26:32 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:29:18.107 04:26:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:18.107 04:26:32 -- common/autotest_common.sh@10 -- # set +x 00:29:18.107 04:26:32 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:29:18.107 04:26:32 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:18.107 04:26:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:18.107 04:26:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:18.107 04:26:32 -- common/autotest_common.sh@10 -- # set +x 00:29:18.107 ************************************ 00:29:18.107 START TEST nvmf_multicontroller 00:29:18.107 ************************************ 00:29:18.107 04:26:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:18.107 * Looking for test storage... 
00:29:18.107 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:18.107 04:26:32 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.107 04:26:32 -- nvmf/common.sh@7 -- # uname -s 00:29:18.107 04:26:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.107 04:26:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.107 04:26:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.107 04:26:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.107 04:26:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.107 04:26:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.107 04:26:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.107 04:26:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.107 04:26:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.107 04:26:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.108 04:26:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:29:18.108 04:26:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:29:18.108 04:26:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.108 04:26:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.108 04:26:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:18.108 04:26:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:18.108 04:26:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.108 04:26:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.108 04:26:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.108 04:26:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.108 04:26:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.108 04:26:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.108 04:26:32 -- paths/export.sh@5 -- # export PATH 00:29:18.108 04:26:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.108 04:26:32 -- nvmf/common.sh@46 -- # : 0 00:29:18.108 04:26:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:18.108 04:26:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:18.108 04:26:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:18.108 04:26:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.108 04:26:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.108 04:26:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:18.108 04:26:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:18.108 04:26:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:18.108 04:26:32 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:18.108 04:26:32 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:18.108 04:26:32 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:18.108 04:26:32 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:18.108 04:26:32 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:18.108 04:26:32 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:18.108 04:26:32 -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:18.108 04:26:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:18.108 04:26:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.108 04:26:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:18.108 04:26:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:18.108 04:26:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:18.108 04:26:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.108 04:26:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.108 04:26:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.108 04:26:32 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:18.108 04:26:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:18.108 04:26:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:18.108 04:26:32 -- common/autotest_common.sh@10 -- # set +x 00:29:24.761 04:26:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:24.761 04:26:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:24.761 04:26:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:24.761 04:26:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 
00:29:24.761 04:26:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:24.761 04:26:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:24.761 04:26:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:24.761 04:26:38 -- nvmf/common.sh@294 -- # net_devs=() 00:29:24.761 04:26:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:24.761 04:26:38 -- nvmf/common.sh@295 -- # e810=() 00:29:24.761 04:26:38 -- nvmf/common.sh@295 -- # local -ga e810 00:29:24.761 04:26:38 -- nvmf/common.sh@296 -- # x722=() 00:29:24.761 04:26:38 -- nvmf/common.sh@296 -- # local -ga x722 00:29:24.761 04:26:38 -- nvmf/common.sh@297 -- # mlx=() 00:29:24.761 04:26:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:24.761 04:26:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.761 04:26:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:24.761 04:26:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:24.761 04:26:38 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:24.761 04:26:38 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:24.761 04:26:38 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:24.761 04:26:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:24.761 04:26:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:24.762 04:26:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:24.762 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:24.762 04:26:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:24.762 04:26:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:24.762 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:24.762 04:26:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:24.762 04:26:38 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:24.762 04:26:38 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.762 04:26:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:24.762 04:26:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.762 04:26:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:24.762 Found net devices under 0000:27:00.0: cvl_0_0 00:29:24.762 04:26:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.762 04:26:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:24.762 04:26:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.762 04:26:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:24.762 04:26:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.762 04:26:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:24.762 Found net devices under 0000:27:00.1: cvl_0_1 00:29:24.762 04:26:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.762 04:26:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:24.762 04:26:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:24.762 04:26:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:24.762 04:26:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.762 04:26:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.762 04:26:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.762 04:26:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:24.762 04:26:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.762 04:26:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.762 04:26:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:24.762 04:26:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.762 04:26:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.762 04:26:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:24.762 04:26:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:24.762 04:26:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.762 04:26:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.762 04:26:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.762 04:26:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.762 04:26:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:24.762 04:26:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.762 04:26:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.762 04:26:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.762 04:26:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:24.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:29:24.762 00:29:24.762 --- 10.0.0.2 ping statistics --- 00:29:24.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.762 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:24.762 04:26:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:29:24.762 00:29:24.762 --- 10.0.0.1 ping statistics --- 00:29:24.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.762 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:29:24.762 04:26:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.762 04:26:38 -- nvmf/common.sh@410 -- # return 0 00:29:24.762 04:26:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:24.762 04:26:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.762 04:26:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:24.762 04:26:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.762 04:26:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:24.762 04:26:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:24.762 04:26:38 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:24.762 04:26:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:24.762 04:26:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:24.762 04:26:38 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 04:26:38 -- nvmf/common.sh@469 -- # nvmfpid=4166228 00:29:24.762 04:26:38 -- nvmf/common.sh@470 -- # waitforlisten 4166228 00:29:24.762 04:26:38 -- common/autotest_common.sh@819 -- # '[' -z 4166228 ']' 00:29:24.762 04:26:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.762 04:26:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:24.762 04:26:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.762 04:26:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:24.762 04:26:38 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 04:26:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:24.762 [2024-05-14 04:26:38.386537] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:24.762 [2024-05-14 04:26:38.386637] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.762 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.762 [2024-05-14 04:26:38.505583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:24.762 [2024-05-14 04:26:38.604826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:24.762 [2024-05-14 04:26:38.604998] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.762 [2024-05-14 04:26:38.605012] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.762 [2024-05-14 04:26:38.605022] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
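The network layout behind these phy-fallback TCP runs is the same in every host test: nvmf_tcp_init moves the first ice port (cvl_0_0) into a private namespace where it serves as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the target application is launched inside that namespace. A condensed sketch of the commands issued above (interface names, addresses and the 0xE core mask are taken from the trace; the nvmf_tgt path is shortened to be relative to the spdk tree and error handling is omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE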
00:29:24.762 [2024-05-14 04:26:38.605166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.762 [2024-05-14 04:26:38.605278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.762 [2024-05-14 04:26:38.605288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.762 04:26:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:24.762 04:26:39 -- common/autotest_common.sh@852 -- # return 0 00:29:24.762 04:26:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:24.762 04:26:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:24.762 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 04:26:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.762 04:26:39 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.762 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.762 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 [2024-05-14 04:26:39.137382] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.762 04:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.762 04:26:39 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.762 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.762 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 Malloc0 00:29:24.762 04:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.762 04:26:39 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.762 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.762 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 04:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.762 04:26:39 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:24.762 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.762 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 04:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.762 04:26:39 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.762 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.762 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 [2024-05-14 04:26:39.221454] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.762 04:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.762 04:26:39 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:24.762 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.762 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 [2024-05-14 04:26:39.229336] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:24.762 04:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.762 04:26:39 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:24.762 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.762 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 Malloc1 00:29:24.762 04:26:39 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.762 04:26:39 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:24.762 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.762 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 04:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.762 04:26:39 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:24.762 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.762 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.762 04:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.763 04:26:39 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:24.763 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.763 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.763 04:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.763 04:26:39 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:24.763 04:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.763 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:24.763 04:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.763 04:26:39 -- host/multicontroller.sh@44 -- # bdevperf_pid=4166543 00:29:24.763 04:26:39 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:24.763 04:26:39 -- host/multicontroller.sh@47 -- # waitforlisten 4166543 /var/tmp/bdevperf.sock 00:29:24.763 04:26:39 -- common/autotest_common.sh@819 -- # '[' -z 4166543 ']' 00:29:24.763 04:26:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:24.763 04:26:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:24.763 04:26:39 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:24.763 04:26:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:24.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
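For reference, the rpc_cmd calls above amount to the following target-side sequence when issued by hand. This is a minimal sketch assuming SPDK's stock scripts/rpc.py client talking to the default /var/tmp/spdk.sock; inside the harness, rpc_cmd plays the same role:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # Malloc1 / nqn.2016-06.io.spdk:cnode2 are set up the same way, after which bdevperf is
    # started with -z on its own RPC socket so the workload can be kicked off later:
    # build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f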
00:29:24.763 04:26:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:24.763 04:26:39 -- common/autotest_common.sh@10 -- # set +x 00:29:25.695 04:26:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:25.695 04:26:40 -- common/autotest_common.sh@852 -- # return 0 00:29:25.695 04:26:40 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:25.695 04:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.695 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.954 NVMe0n1 00:29:25.954 04:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.954 04:26:40 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:25.954 04:26:40 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:25.954 04:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.954 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.954 04:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.954 1 00:29:25.954 04:26:40 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:25.954 04:26:40 -- common/autotest_common.sh@640 -- # local es=0 00:29:25.954 04:26:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:25.954 04:26:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:25.954 04:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.954 04:26:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:25.954 04:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.954 04:26:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:25.954 04:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.954 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.954 request: 00:29:25.954 { 00:29:25.954 "name": "NVMe0", 00:29:25.954 "trtype": "tcp", 00:29:25.954 "traddr": "10.0.0.2", 00:29:25.954 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:25.954 "hostaddr": "10.0.0.2", 00:29:25.954 "hostsvcid": "60000", 00:29:25.954 "adrfam": "ipv4", 00:29:25.954 "trsvcid": "4420", 00:29:25.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.954 "method": "bdev_nvme_attach_controller", 00:29:25.954 "req_id": 1 00:29:25.954 } 00:29:25.954 Got JSON-RPC error response 00:29:25.954 response: 00:29:25.954 { 00:29:25.954 "code": -114, 00:29:25.954 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:25.954 } 00:29:25.954 04:26:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:25.954 04:26:40 -- common/autotest_common.sh@643 -- # es=1 00:29:25.954 04:26:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:25.954 04:26:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:25.954 04:26:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:25.954 04:26:40 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:25.954 04:26:40 -- common/autotest_common.sh@640 -- # local es=0 00:29:25.954 04:26:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:25.954 04:26:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:25.954 04:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.954 04:26:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:25.954 04:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.954 04:26:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:25.954 04:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.954 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.954 request: 00:29:25.954 { 00:29:25.954 "name": "NVMe0", 00:29:25.954 "trtype": "tcp", 00:29:25.954 "traddr": "10.0.0.2", 00:29:25.954 "hostaddr": "10.0.0.2", 00:29:25.954 "hostsvcid": "60000", 00:29:25.954 "adrfam": "ipv4", 00:29:25.954 "trsvcid": "4420", 00:29:25.954 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:25.954 "method": "bdev_nvme_attach_controller", 00:29:25.954 "req_id": 1 00:29:25.954 } 00:29:25.954 Got JSON-RPC error response 00:29:25.954 response: 00:29:25.954 { 00:29:25.954 "code": -114, 00:29:25.954 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:25.954 } 00:29:25.954 04:26:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:25.954 04:26:40 -- common/autotest_common.sh@643 -- # es=1 00:29:25.954 04:26:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:25.954 04:26:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:25.954 04:26:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:25.954 04:26:40 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:25.954 04:26:40 -- common/autotest_common.sh@640 -- # local es=0 00:29:25.954 04:26:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:25.954 04:26:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:25.954 04:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.954 04:26:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:25.954 04:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.954 04:26:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:25.954 04:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.954 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.954 request: 00:29:25.954 { 00:29:25.954 "name": "NVMe0", 00:29:25.955 "trtype": "tcp", 00:29:25.955 "traddr": "10.0.0.2", 00:29:25.955 "hostaddr": 
"10.0.0.2", 00:29:25.955 "hostsvcid": "60000", 00:29:25.955 "adrfam": "ipv4", 00:29:25.955 "trsvcid": "4420", 00:29:25.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.955 "multipath": "disable", 00:29:25.955 "method": "bdev_nvme_attach_controller", 00:29:25.955 "req_id": 1 00:29:25.955 } 00:29:25.955 Got JSON-RPC error response 00:29:25.955 response: 00:29:25.955 { 00:29:25.955 "code": -114, 00:29:25.955 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:29:25.955 } 00:29:25.955 04:26:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:25.955 04:26:40 -- common/autotest_common.sh@643 -- # es=1 00:29:25.955 04:26:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:25.955 04:26:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:25.955 04:26:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:25.955 04:26:40 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:25.955 04:26:40 -- common/autotest_common.sh@640 -- # local es=0 00:29:25.955 04:26:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:25.955 04:26:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:25.955 04:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.955 04:26:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:25.955 04:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:25.955 04:26:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:25.955 04:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.955 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.955 request: 00:29:25.955 { 00:29:25.955 "name": "NVMe0", 00:29:25.955 "trtype": "tcp", 00:29:25.955 "traddr": "10.0.0.2", 00:29:25.955 "hostaddr": "10.0.0.2", 00:29:25.955 "hostsvcid": "60000", 00:29:25.955 "adrfam": "ipv4", 00:29:25.955 "trsvcid": "4420", 00:29:25.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.955 "multipath": "failover", 00:29:25.955 "method": "bdev_nvme_attach_controller", 00:29:25.955 "req_id": 1 00:29:25.955 } 00:29:25.955 Got JSON-RPC error response 00:29:25.955 response: 00:29:25.955 { 00:29:25.955 "code": -114, 00:29:25.955 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:25.955 } 00:29:25.955 04:26:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:25.955 04:26:40 -- common/autotest_common.sh@643 -- # es=1 00:29:25.955 04:26:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:25.955 04:26:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:25.955 04:26:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:25.955 04:26:40 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:25.955 04:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.955 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.955 00:29:25.955 04:26:40 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:29:25.955 04:26:40 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:25.955 04:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.955 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:29:25.955 04:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.955 04:26:40 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:25.955 04:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.955 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:29:26.215 00:29:26.215 04:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:26.215 04:26:40 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:26.215 04:26:40 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:26.215 04:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:26.215 04:26:40 -- common/autotest_common.sh@10 -- # set +x 00:29:26.215 04:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:26.215 04:26:40 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:26.215 04:26:40 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:27.154 0 00:29:27.154 04:26:41 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:27.154 04:26:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:27.155 04:26:41 -- common/autotest_common.sh@10 -- # set +x 00:29:27.413 04:26:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:27.413 04:26:41 -- host/multicontroller.sh@100 -- # killprocess 4166543 00:29:27.413 04:26:41 -- common/autotest_common.sh@926 -- # '[' -z 4166543 ']' 00:29:27.413 04:26:41 -- common/autotest_common.sh@930 -- # kill -0 4166543 00:29:27.413 04:26:41 -- common/autotest_common.sh@931 -- # uname 00:29:27.413 04:26:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:27.413 04:26:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4166543 00:29:27.413 04:26:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:27.413 04:26:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:27.413 04:26:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4166543' 00:29:27.413 killing process with pid 4166543 00:29:27.413 04:26:41 -- common/autotest_common.sh@945 -- # kill 4166543 00:29:27.413 04:26:41 -- common/autotest_common.sh@950 -- # wait 4166543 00:29:27.671 04:26:42 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:27.671 04:26:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:27.671 04:26:42 -- common/autotest_common.sh@10 -- # set +x 00:29:27.671 04:26:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:27.671 04:26:42 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:27.671 04:26:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:27.671 04:26:42 -- common/autotest_common.sh@10 -- # set +x 00:29:27.671 04:26:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:27.671 04:26:42 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:27.671 
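Condensed, the attach/detach exercise above checks that a controller name stays bound to one subsystem and host identity, so a repeat attach is only accepted when it adds a genuinely new path. The same RPCs against bdevperf's socket, as a sketch (all names and flags taken from the trace; error cases noted in comments):

    RPC='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
    # First path: creates controller NVMe0 and exposes namespace NVMe0n1.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # Re-attaching NVMe0 with a different hostnqn, pointing it at cnode2, or repeating the same
    # path with "-x disable" or "-x failover" is rejected with JSON-RPC error -114, as logged above.
    # A second listener of the same subsystem is accepted as an additional path for NVMe0:
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1
    # The extra path can be dropped again, and a second controller name attached on that port:
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    $RPC bdev_nvme_get_controllers | grep -c NVMe    # the test expects 2 here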
04:26:42 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:27.671 04:26:42 -- common/autotest_common.sh@1597 -- # read -r file 00:29:27.671 04:26:42 -- common/autotest_common.sh@1596 -- # sort -u 00:29:27.671 04:26:42 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:27.671 04:26:42 -- common/autotest_common.sh@1598 -- # cat 00:29:27.671 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:27.671 [2024-05-14 04:26:39.384171] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:27.671 [2024-05-14 04:26:39.384333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166543 ] 00:29:27.671 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.671 [2024-05-14 04:26:39.512579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.671 [2024-05-14 04:26:39.603495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.671 [2024-05-14 04:26:40.635387] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 7ddad3a1-01fc-45a2-98c6-b2690af649bc already exists 00:29:27.671 [2024-05-14 04:26:40.635431] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:7ddad3a1-01fc-45a2-98c6-b2690af649bc alias for bdev NVMe1n1 00:29:27.671 [2024-05-14 04:26:40.635445] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:27.671 Running I/O for 1 seconds... 00:29:27.671 00:29:27.671 Latency(us) 00:29:27.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.671 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:27.671 NVMe0n1 : 1.00 26219.32 102.42 0.00 0.00 4871.93 2949.12 15107.77 00:29:27.671 =================================================================================================================== 00:29:27.671 Total : 26219.32 102.42 0.00 0.00 4871.93 2949.12 15107.77 00:29:27.671 Received shutdown signal, test time was about 1.000000 seconds 00:29:27.671 00:29:27.671 Latency(us) 00:29:27.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.671 =================================================================================================================== 00:29:27.671 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.671 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:27.671 04:26:42 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:27.671 04:26:42 -- common/autotest_common.sh@1597 -- # read -r file 00:29:27.671 04:26:42 -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:27.671 04:26:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:27.671 04:26:42 -- nvmf/common.sh@116 -- # sync 00:29:27.671 04:26:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:27.671 04:26:42 -- nvmf/common.sh@119 -- # set +e 00:29:27.671 04:26:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:27.671 04:26:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:27.671 rmmod nvme_tcp 00:29:27.671 rmmod nvme_fabrics 00:29:27.671 rmmod nvme_keyring 00:29:27.671 04:26:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:27.671 04:26:42 -- nvmf/common.sh@123 -- # set -e 00:29:27.671 04:26:42 -- 
nvmf/common.sh@124 -- # return 0 00:29:27.671 04:26:42 -- nvmf/common.sh@477 -- # '[' -n 4166228 ']' 00:29:27.671 04:26:42 -- nvmf/common.sh@478 -- # killprocess 4166228 00:29:27.671 04:26:42 -- common/autotest_common.sh@926 -- # '[' -z 4166228 ']' 00:29:27.671 04:26:42 -- common/autotest_common.sh@930 -- # kill -0 4166228 00:29:27.671 04:26:42 -- common/autotest_common.sh@931 -- # uname 00:29:27.929 04:26:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:27.930 04:26:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4166228 00:29:27.930 04:26:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:27.930 04:26:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:27.930 04:26:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4166228' 00:29:27.930 killing process with pid 4166228 00:29:27.930 04:26:42 -- common/autotest_common.sh@945 -- # kill 4166228 00:29:27.930 04:26:42 -- common/autotest_common.sh@950 -- # wait 4166228 00:29:28.499 04:26:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:28.499 04:26:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:28.499 04:26:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:28.499 04:26:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.499 04:26:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:28.499 04:26:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.499 04:26:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.499 04:26:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.403 04:26:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:30.403 00:29:30.403 real 0m12.508s 00:29:30.403 user 0m16.986s 00:29:30.403 sys 0m5.233s 00:29:30.403 04:26:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:30.403 04:26:44 -- common/autotest_common.sh@10 -- # set +x 00:29:30.404 ************************************ 00:29:30.404 END TEST nvmf_multicontroller 00:29:30.404 ************************************ 00:29:30.404 04:26:44 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:30.404 04:26:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:30.404 04:26:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:30.404 04:26:44 -- common/autotest_common.sh@10 -- # set +x 00:29:30.404 ************************************ 00:29:30.404 START TEST nvmf_aer 00:29:30.404 ************************************ 00:29:30.404 04:26:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:30.663 * Looking for test storage... 
00:29:30.663 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:30.663 04:26:45 -- host/aer.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.663 04:26:45 -- nvmf/common.sh@7 -- # uname -s 00:29:30.663 04:26:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.663 04:26:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.663 04:26:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.663 04:26:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.663 04:26:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.663 04:26:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.663 04:26:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.663 04:26:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.663 04:26:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.663 04:26:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.663 04:26:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:29:30.663 04:26:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:29:30.663 04:26:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.663 04:26:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.663 04:26:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:30.663 04:26:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:30.663 04:26:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.663 04:26:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.663 04:26:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.663 04:26:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.663 04:26:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.664 04:26:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.664 04:26:45 -- paths/export.sh@5 -- # export PATH 00:29:30.664 04:26:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.664 04:26:45 -- nvmf/common.sh@46 -- # : 0 00:29:30.664 04:26:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:30.664 04:26:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:30.664 04:26:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:30.664 04:26:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.664 04:26:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.664 04:26:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:30.664 04:26:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:30.664 04:26:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:30.664 04:26:45 -- host/aer.sh@11 -- # nvmftestinit 00:29:30.664 04:26:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:30.664 04:26:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.664 04:26:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:30.664 04:26:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:30.664 04:26:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:30.664 04:26:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.664 04:26:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.664 04:26:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.664 04:26:45 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:30.664 04:26:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:30.664 04:26:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:30.664 04:26:45 -- common/autotest_common.sh@10 -- # set +x 00:29:35.932 04:26:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:35.932 04:26:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:35.932 04:26:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:35.932 04:26:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:35.932 04:26:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:35.932 04:26:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:35.932 04:26:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:35.932 04:26:50 -- nvmf/common.sh@294 -- # net_devs=() 00:29:35.932 04:26:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:35.932 04:26:50 -- nvmf/common.sh@295 -- # e810=() 00:29:35.932 04:26:50 -- nvmf/common.sh@295 -- # local -ga e810 00:29:35.932 04:26:50 -- nvmf/common.sh@296 -- # x722=() 
00:29:35.932 04:26:50 -- nvmf/common.sh@296 -- # local -ga x722 00:29:35.932 04:26:50 -- nvmf/common.sh@297 -- # mlx=() 00:29:35.932 04:26:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:35.932 04:26:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.932 04:26:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:35.932 04:26:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:35.932 04:26:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:35.932 04:26:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:35.932 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:35.932 04:26:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:35.932 04:26:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:35.932 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:35.932 04:26:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:35.932 04:26:50 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:35.932 04:26:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.932 04:26:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:35.932 04:26:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.932 04:26:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:35.932 Found net devices under 0000:27:00.0: cvl_0_0 00:29:35.932 04:26:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.932 04:26:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:35.932 
04:26:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.932 04:26:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:35.932 04:26:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.932 04:26:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:35.932 Found net devices under 0000:27:00.1: cvl_0_1 00:29:35.932 04:26:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.932 04:26:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:35.932 04:26:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:35.932 04:26:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:35.932 04:26:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:35.932 04:26:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.932 04:26:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.932 04:26:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.932 04:26:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:35.932 04:26:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.932 04:26:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.932 04:26:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:35.932 04:26:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.932 04:26:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.932 04:26:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:35.932 04:26:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:35.932 04:26:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.932 04:26:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.932 04:26:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.932 04:26:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.932 04:26:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:35.932 04:26:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.932 04:26:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.932 04:26:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.932 04:26:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:35.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:29:35.932 00:29:35.932 --- 10.0.0.2 ping statistics --- 00:29:35.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.932 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:29:35.932 04:26:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:29:35.932 00:29:35.932 --- 10.0.0.1 ping statistics --- 00:29:35.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.932 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:29:35.932 04:26:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.932 04:26:50 -- nvmf/common.sh@410 -- # return 0 00:29:35.932 04:26:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:35.932 04:26:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.932 04:26:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:35.933 04:26:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:35.933 04:26:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.933 04:26:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:35.933 04:26:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:35.933 04:26:50 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:35.933 04:26:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:35.933 04:26:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:35.933 04:26:50 -- common/autotest_common.sh@10 -- # set +x 00:29:35.933 04:26:50 -- nvmf/common.sh@469 -- # nvmfpid=4171072 00:29:35.933 04:26:50 -- nvmf/common.sh@470 -- # waitforlisten 4171072 00:29:35.933 04:26:50 -- common/autotest_common.sh@819 -- # '[' -z 4171072 ']' 00:29:35.933 04:26:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.933 04:26:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:35.933 04:26:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.933 04:26:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:35.933 04:26:50 -- common/autotest_common.sh@10 -- # set +x 00:29:35.933 04:26:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:35.933 [2024-05-14 04:26:50.501274] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:35.933 [2024-05-14 04:26:50.501376] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.190 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.190 [2024-05-14 04:26:50.620019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:36.190 [2024-05-14 04:26:50.714671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:36.190 [2024-05-14 04:26:50.714832] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.190 [2024-05-14 04:26:50.714845] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.190 [2024-05-14 04:26:50.714854] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
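The wiring behind these 10.0.0.x addresses is set up by nvmf_tcp_init just above: the two ice-driven ports found earlier (cvl_0_0 and cvl_0_1) are split between network namespaces so that target and initiator get separate stacks on one machine. Reduced to its core, and mirroring the ip/iptables calls in the trace (a sketch, not the full function):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # plus the reverse ping from inside the namespace

nvmf_tgt is then launched under "ip netns exec cvl_0_0_ns_spdk", which is why its listeners live on 10.0.0.2.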
00:29:36.190 [2024-05-14 04:26:50.715009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.190 [2024-05-14 04:26:50.715105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.190 [2024-05-14 04:26:50.715219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.190 [2024-05-14 04:26:50.715228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.758 04:26:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:36.758 04:26:51 -- common/autotest_common.sh@852 -- # return 0 00:29:36.758 04:26:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:36.758 04:26:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:36.758 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:36.758 04:26:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.758 04:26:51 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:36.758 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.758 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:36.758 [2024-05-14 04:26:51.234871] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.758 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.758 04:26:51 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:36.758 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.758 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:36.758 Malloc0 00:29:36.758 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.758 04:26:51 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:36.758 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.758 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:36.758 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.758 04:26:51 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:36.758 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.759 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:36.759 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.759 04:26:51 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.759 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.759 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:36.759 [2024-05-14 04:26:51.294744] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.759 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.759 04:26:51 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:36.759 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.759 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:36.759 [2024-05-14 04:26:51.302443] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:36.759 [ 00:29:36.759 { 00:29:36.759 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:36.759 "subtype": "Discovery", 00:29:36.759 "listen_addresses": [], 00:29:36.759 "allow_any_host": true, 00:29:36.759 "hosts": [] 00:29:36.759 }, 00:29:36.759 { 00:29:36.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:29:36.759 "subtype": "NVMe", 00:29:36.759 "listen_addresses": [ 00:29:36.759 { 00:29:36.759 "transport": "TCP", 00:29:36.759 "trtype": "TCP", 00:29:36.759 "adrfam": "IPv4", 00:29:36.759 "traddr": "10.0.0.2", 00:29:36.759 "trsvcid": "4420" 00:29:36.759 } 00:29:36.759 ], 00:29:36.759 "allow_any_host": true, 00:29:36.759 "hosts": [], 00:29:36.759 "serial_number": "SPDK00000000000001", 00:29:36.759 "model_number": "SPDK bdev Controller", 00:29:36.759 "max_namespaces": 2, 00:29:36.759 "min_cntlid": 1, 00:29:36.759 "max_cntlid": 65519, 00:29:36.759 "namespaces": [ 00:29:36.759 { 00:29:36.759 "nsid": 1, 00:29:36.759 "bdev_name": "Malloc0", 00:29:36.759 "name": "Malloc0", 00:29:36.759 "nguid": "6DCBE5FE70834C4B96E2A1AB747F0ECC", 00:29:36.759 "uuid": "6dcbe5fe-7083-4c4b-96e2-a1ab747f0ecc" 00:29:36.759 } 00:29:36.759 ] 00:29:36.759 } 00:29:36.759 ] 00:29:36.759 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.759 04:26:51 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:36.759 04:26:51 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:36.759 04:26:51 -- host/aer.sh@33 -- # aerpid=4171244 00:29:36.759 04:26:51 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:36.759 04:26:51 -- common/autotest_common.sh@1244 -- # local i=0 00:29:36.759 04:26:51 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:36.759 04:26:51 -- host/aer.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:36.759 04:26:51 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:29:36.759 04:26:51 -- common/autotest_common.sh@1247 -- # i=1 00:29:36.759 04:26:51 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:29:37.018 04:26:51 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:37.018 04:26:51 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:29:37.018 04:26:51 -- common/autotest_common.sh@1247 -- # i=2 00:29:37.018 04:26:51 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:29:37.018 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.018 04:26:51 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:37.018 04:26:51 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:29:37.018 04:26:51 -- common/autotest_common.sh@1247 -- # i=3 00:29:37.018 04:26:51 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:29:37.278 04:26:51 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:37.278 04:26:51 -- common/autotest_common.sh@1246 -- # '[' 3 -lt 200 ']' 00:29:37.278 04:26:51 -- common/autotest_common.sh@1247 -- # i=4 00:29:37.278 04:26:51 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:29:37.278 04:26:51 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:37.279 04:26:51 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:37.279 04:26:51 -- common/autotest_common.sh@1255 -- # return 0 00:29:37.279 04:26:51 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:37.279 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.279 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:37.279 Malloc1 00:29:37.279 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.279 04:26:51 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:37.279 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.279 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:37.279 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.279 04:26:51 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:37.279 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.279 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:37.279 [ 00:29:37.279 { 00:29:37.279 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:37.279 "subtype": "Discovery", 00:29:37.279 "listen_addresses": [], 00:29:37.279 "allow_any_host": true, 00:29:37.279 "hosts": [] 00:29:37.279 }, 00:29:37.279 { 00:29:37.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:37.279 "subtype": "NVMe", 00:29:37.279 "listen_addresses": [ 00:29:37.279 { 00:29:37.279 "transport": "TCP", 00:29:37.279 "trtype": "TCP", 00:29:37.279 "adrfam": "IPv4", 00:29:37.279 "traddr": "10.0.0.2", 00:29:37.279 "trsvcid": "4420" 00:29:37.279 } 00:29:37.279 ], 00:29:37.279 "allow_any_host": true, 00:29:37.279 "hosts": [], 00:29:37.279 "serial_number": "SPDK00000000000001", 00:29:37.279 "model_number": "SPDK bdev Controller", 00:29:37.279 "max_namespaces": 2, 00:29:37.279 "min_cntlid": 1, 00:29:37.279 "max_cntlid": 65519, 00:29:37.279 "namespaces": [ 00:29:37.279 { 00:29:37.279 "nsid": 1, 00:29:37.279 "bdev_name": "Malloc0", 00:29:37.279 "name": "Malloc0", 00:29:37.279 "nguid": "6DCBE5FE70834C4B96E2A1AB747F0ECC", 00:29:37.279 "uuid": "6dcbe5fe-7083-4c4b-96e2-a1ab747f0ecc" 00:29:37.279 }, 00:29:37.279 { 00:29:37.279 "nsid": 2, 00:29:37.279 "bdev_name": "Malloc1", 00:29:37.279 "name": "Malloc1", 00:29:37.279 "nguid": "C520468930A8468294E6979EA3DADE97", 00:29:37.279 "uuid": "c5204689-30a8-4682-94e6-979ea3dade97" 00:29:37.279 } 00:29:37.279 ] 00:29:37.279 } 00:29:37.279 ] 00:29:37.279 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.279 04:26:51 -- host/aer.sh@43 -- # wait 4171244 00:29:37.279 Asynchronous Event Request test 00:29:37.279 Attaching to 10.0.0.2 00:29:37.279 Attached to 10.0.0.2 00:29:37.279 Registering asynchronous event callbacks... 00:29:37.279 Starting namespace attribute notice tests for all controllers... 00:29:37.279 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:37.279 aer_cb - Changed Namespace 00:29:37.279 Cleaning up... 
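The "Changed Namespace" notice above is the point of this test: the aer binary connects to cnode1 and arms Asynchronous Event Requests, then the harness hot-adds a second namespace, which makes the controller complete an AER with event type 0x02 (Notice) and info 0x00 (Namespace Attribute Changed), and the callback reads log page 4 (the Changed Namespace List). Stripped of the wait loops, the trigger is just the following (a sketch; paths are abbreviated relative to the SPDK tree, and the touch file is how the tool signals the harness that its AERs are armed):

    test/nvme/aer/aer -n 2 -t /tmp/aer_touch_file \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    # ...wait for /tmp/aer_touch_file to appear, then hot-add the second namespace:
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    # nvmf_get_subsystems afterwards shows Malloc1 as nsid 2, matching the aer_cb output above.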
00:29:37.279 04:26:51 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:37.279 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.279 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:37.538 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.538 04:26:51 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:37.538 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.538 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:37.538 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.538 04:26:51 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.538 04:26:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.538 04:26:51 -- common/autotest_common.sh@10 -- # set +x 00:29:37.538 04:26:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.538 04:26:51 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:37.538 04:26:51 -- host/aer.sh@51 -- # nvmftestfini 00:29:37.538 04:26:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:37.538 04:26:51 -- nvmf/common.sh@116 -- # sync 00:29:37.538 04:26:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:37.538 04:26:52 -- nvmf/common.sh@119 -- # set +e 00:29:37.538 04:26:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:37.538 04:26:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:37.538 rmmod nvme_tcp 00:29:37.538 rmmod nvme_fabrics 00:29:37.538 rmmod nvme_keyring 00:29:37.538 04:26:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:37.538 04:26:52 -- nvmf/common.sh@123 -- # set -e 00:29:37.538 04:26:52 -- nvmf/common.sh@124 -- # return 0 00:29:37.538 04:26:52 -- nvmf/common.sh@477 -- # '[' -n 4171072 ']' 00:29:37.538 04:26:52 -- nvmf/common.sh@478 -- # killprocess 4171072 00:29:37.538 04:26:52 -- common/autotest_common.sh@926 -- # '[' -z 4171072 ']' 00:29:37.538 04:26:52 -- common/autotest_common.sh@930 -- # kill -0 4171072 00:29:37.538 04:26:52 -- common/autotest_common.sh@931 -- # uname 00:29:37.538 04:26:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:37.538 04:26:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4171072 00:29:37.797 04:26:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:37.797 04:26:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:37.797 04:26:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4171072' 00:29:37.797 killing process with pid 4171072 00:29:37.797 04:26:52 -- common/autotest_common.sh@945 -- # kill 4171072 00:29:37.797 [2024-05-14 04:26:52.131333] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:37.797 04:26:52 -- common/autotest_common.sh@950 -- # wait 4171072 00:29:38.055 04:26:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:38.055 04:26:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:38.055 04:26:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:38.055 04:26:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:38.055 04:26:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:38.055 04:26:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.055 04:26:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:38.055 04:26:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.590 04:26:54 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:40.590 00:29:40.590 real 0m9.689s 00:29:40.590 user 0m8.465s 00:29:40.590 sys 0m4.588s 00:29:40.590 04:26:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.590 04:26:54 -- common/autotest_common.sh@10 -- # set +x 00:29:40.590 ************************************ 00:29:40.590 END TEST nvmf_aer 00:29:40.590 ************************************ 00:29:40.590 04:26:54 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:40.590 04:26:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:40.590 04:26:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:40.590 04:26:54 -- common/autotest_common.sh@10 -- # set +x 00:29:40.590 ************************************ 00:29:40.590 START TEST nvmf_async_init 00:29:40.590 ************************************ 00:29:40.590 04:26:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:40.590 * Looking for test storage... 00:29:40.590 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:40.590 04:26:54 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.590 04:26:54 -- nvmf/common.sh@7 -- # uname -s 00:29:40.590 04:26:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.590 04:26:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.590 04:26:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.590 04:26:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.590 04:26:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.590 04:26:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.590 04:26:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.590 04:26:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.590 04:26:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.590 04:26:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.590 04:26:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:29:40.590 04:26:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:29:40.590 04:26:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.590 04:26:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.590 04:26:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:40.590 04:26:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:40.590 04:26:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.590 04:26:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.590 04:26:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.590 04:26:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.590 
04:26:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.590 04:26:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.590 04:26:54 -- paths/export.sh@5 -- # export PATH 00:29:40.590 04:26:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.590 04:26:54 -- nvmf/common.sh@46 -- # : 0 00:29:40.590 04:26:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:40.590 04:26:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:40.590 04:26:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:40.590 04:26:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.590 04:26:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.590 04:26:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:40.590 04:26:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:40.590 04:26:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:40.590 04:26:54 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:40.590 04:26:54 -- host/async_init.sh@14 -- # null_block_size=512 00:29:40.590 04:26:54 -- host/async_init.sh@15 -- # null_bdev=null0 00:29:40.590 04:26:54 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:40.590 04:26:54 -- host/async_init.sh@20 -- # uuidgen 00:29:40.590 04:26:54 -- host/async_init.sh@20 -- # tr -d - 00:29:40.590 04:26:54 -- host/async_init.sh@20 -- # nguid=91a5f38d620749089eb2d04fc5ac2f7e 00:29:40.590 04:26:54 -- host/async_init.sh@22 -- # nvmftestinit 00:29:40.590 04:26:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:40.590 04:26:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.590 04:26:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:40.590 04:26:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:40.590 04:26:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:40.590 04:26:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.590 04:26:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.590 04:26:54 -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:29:40.590 04:26:54 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:40.590 04:26:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:40.590 04:26:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:40.590 04:26:54 -- common/autotest_common.sh@10 -- # set +x 00:29:45.919 04:27:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:45.919 04:27:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:45.919 04:27:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:45.919 04:27:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:45.919 04:27:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:45.919 04:27:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:45.919 04:27:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:45.919 04:27:00 -- nvmf/common.sh@294 -- # net_devs=() 00:29:45.919 04:27:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:45.919 04:27:00 -- nvmf/common.sh@295 -- # e810=() 00:29:45.919 04:27:00 -- nvmf/common.sh@295 -- # local -ga e810 00:29:45.919 04:27:00 -- nvmf/common.sh@296 -- # x722=() 00:29:45.919 04:27:00 -- nvmf/common.sh@296 -- # local -ga x722 00:29:45.919 04:27:00 -- nvmf/common.sh@297 -- # mlx=() 00:29:45.919 04:27:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:45.919 04:27:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.919 04:27:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:45.919 04:27:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:45.919 04:27:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:45.919 04:27:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:45.919 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:45.919 04:27:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:45.919 04:27:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:45.919 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:45.919 04:27:00 -- nvmf/common.sh@341 -- # [[ 
ice == unknown ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:45.919 04:27:00 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:45.919 04:27:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.919 04:27:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:45.919 04:27:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.919 04:27:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:45.919 Found net devices under 0000:27:00.0: cvl_0_0 00:29:45.919 04:27:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.919 04:27:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:45.919 04:27:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.919 04:27:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:45.919 04:27:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.919 04:27:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:45.919 Found net devices under 0000:27:00.1: cvl_0_1 00:29:45.919 04:27:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.919 04:27:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:45.919 04:27:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:45.919 04:27:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:45.919 04:27:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:45.920 04:27:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.920 04:27:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.920 04:27:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.920 04:27:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:45.920 04:27:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.920 04:27:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.920 04:27:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:45.920 04:27:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.920 04:27:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.920 04:27:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:45.920 04:27:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:45.920 04:27:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.920 04:27:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.920 04:27:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.920 04:27:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.208 04:27:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:46.208 04:27:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.208 04:27:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.208 04:27:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.208 04:27:00 -- 
nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:46.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:29:46.208 00:29:46.208 --- 10.0.0.2 ping statistics --- 00:29:46.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.208 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:29:46.208 04:27:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:46.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:29:46.208 00:29:46.208 --- 10.0.0.1 ping statistics --- 00:29:46.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.208 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:29:46.208 04:27:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.208 04:27:00 -- nvmf/common.sh@410 -- # return 0 00:29:46.208 04:27:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:46.208 04:27:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.208 04:27:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:46.208 04:27:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:46.208 04:27:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.208 04:27:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:46.208 04:27:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:46.208 04:27:00 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:46.208 04:27:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:46.208 04:27:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:46.208 04:27:00 -- common/autotest_common.sh@10 -- # set +x 00:29:46.208 04:27:00 -- nvmf/common.sh@469 -- # nvmfpid=4175497 00:29:46.208 04:27:00 -- nvmf/common.sh@470 -- # waitforlisten 4175497 00:29:46.208 04:27:00 -- common/autotest_common.sh@819 -- # '[' -z 4175497 ']' 00:29:46.208 04:27:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.208 04:27:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:46.208 04:27:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.208 04:27:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:46.208 04:27:00 -- common/autotest_common.sh@10 -- # set +x 00:29:46.208 04:27:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:46.208 [2024-05-14 04:27:00.746183] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:46.208 [2024-05-14 04:27:00.746332] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.469 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.469 [2024-05-14 04:27:00.890534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.469 [2024-05-14 04:27:00.982895] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:46.469 [2024-05-14 04:27:00.983088] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
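The nvmf_tcp_init sequence traced above is the harness's standard NVMe/TCP loopback bring-up: one port (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1, TCP port 4420 is opened in the firewall, and a ping in each direction confirms the path before the target application is started. A minimal standalone sketch of the same bring-up, assuming root and the same pair of back-to-back ports, would be:

    # move the target-side port into its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1; target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP traffic (port 4420) arriving on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions, then make sure the host kernel driver is present
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp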
00:29:46.469 [2024-05-14 04:27:00.983103] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.469 [2024-05-14 04:27:00.983112] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.469 [2024-05-14 04:27:00.983147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.040 04:27:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:47.040 04:27:01 -- common/autotest_common.sh@852 -- # return 0 00:29:47.040 04:27:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:47.040 04:27:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:47.040 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.040 04:27:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.040 04:27:01 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:47.040 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.040 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.040 [2024-05-14 04:27:01.496300] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.040 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.040 04:27:01 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:47.040 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.040 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.040 null0 00:29:47.040 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.040 04:27:01 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:47.040 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.040 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.040 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.040 04:27:01 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:47.040 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.040 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.040 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.040 04:27:01 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 91a5f38d620749089eb2d04fc5ac2f7e 00:29:47.040 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.040 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.040 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.040 04:27:01 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:47.040 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.040 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.040 [2024-05-14 04:27:01.536470] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.040 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.040 04:27:01 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:47.040 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.040 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.300 nvme0n1 00:29:47.300 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.300 04:27:01 -- host/async_init.sh@41 -- # 
rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:47.300 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.300 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.300 [ 00:29:47.300 { 00:29:47.300 "name": "nvme0n1", 00:29:47.300 "aliases": [ 00:29:47.300 "91a5f38d-6207-4908-9eb2-d04fc5ac2f7e" 00:29:47.300 ], 00:29:47.300 "product_name": "NVMe disk", 00:29:47.300 "block_size": 512, 00:29:47.300 "num_blocks": 2097152, 00:29:47.300 "uuid": "91a5f38d-6207-4908-9eb2-d04fc5ac2f7e", 00:29:47.300 "assigned_rate_limits": { 00:29:47.300 "rw_ios_per_sec": 0, 00:29:47.300 "rw_mbytes_per_sec": 0, 00:29:47.300 "r_mbytes_per_sec": 0, 00:29:47.300 "w_mbytes_per_sec": 0 00:29:47.300 }, 00:29:47.300 "claimed": false, 00:29:47.300 "zoned": false, 00:29:47.300 "supported_io_types": { 00:29:47.300 "read": true, 00:29:47.300 "write": true, 00:29:47.300 "unmap": false, 00:29:47.300 "write_zeroes": true, 00:29:47.300 "flush": true, 00:29:47.300 "reset": true, 00:29:47.300 "compare": true, 00:29:47.300 "compare_and_write": true, 00:29:47.300 "abort": true, 00:29:47.300 "nvme_admin": true, 00:29:47.300 "nvme_io": true 00:29:47.300 }, 00:29:47.300 "driver_specific": { 00:29:47.300 "nvme": [ 00:29:47.300 { 00:29:47.300 "trid": { 00:29:47.300 "trtype": "TCP", 00:29:47.300 "adrfam": "IPv4", 00:29:47.300 "traddr": "10.0.0.2", 00:29:47.300 "trsvcid": "4420", 00:29:47.300 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:47.300 }, 00:29:47.300 "ctrlr_data": { 00:29:47.300 "cntlid": 1, 00:29:47.300 "vendor_id": "0x8086", 00:29:47.300 "model_number": "SPDK bdev Controller", 00:29:47.300 "serial_number": "00000000000000000000", 00:29:47.300 "firmware_revision": "24.01.1", 00:29:47.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.300 "oacs": { 00:29:47.300 "security": 0, 00:29:47.300 "format": 0, 00:29:47.300 "firmware": 0, 00:29:47.300 "ns_manage": 0 00:29:47.300 }, 00:29:47.300 "multi_ctrlr": true, 00:29:47.300 "ana_reporting": false 00:29:47.300 }, 00:29:47.300 "vs": { 00:29:47.300 "nvme_version": "1.3" 00:29:47.300 }, 00:29:47.300 "ns_data": { 00:29:47.300 "id": 1, 00:29:47.300 "can_share": true 00:29:47.300 } 00:29:47.300 } 00:29:47.300 ], 00:29:47.300 "mp_policy": "active_passive" 00:29:47.300 } 00:29:47.300 } 00:29:47.300 ] 00:29:47.300 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.300 04:27:01 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:47.300 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.300 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.300 [2024-05-14 04:27:01.788372] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:47.300 [2024-05-14 04:27:01.788464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003bc0 (9): Bad file descriptor 00:29:47.561 [2024-05-14 04:27:01.961301] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
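Everything async_init has exercised up to the controller reset above maps onto a short RPC sequence. rpc_cmd in the autotest harness is, as far as this trace shows, a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock (an assumption worth checking against test/common/autotest_common.sh); issued by hand against the running nvmf_tgt, the same flow would look roughly like:

    # target side: export a 1024 MiB null bdev as namespace 1 of cnode0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 91a5f38d620749089eb2d04fc5ac2f7e
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side (same app): attach over TCP, inspect the resulting bdev, then reset
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1
    ./scripts/rpc.py bdev_nvme_reset_controller nvme0

In the bdev_get_bdevs dumps above and below, the uuid/nguid stay constant across the reset while cntlid moves from 1 to 2: a fresh controller has been connected to the same namespace.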
00:29:47.561 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.561 04:27:01 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:47.561 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.561 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.561 [ 00:29:47.561 { 00:29:47.561 "name": "nvme0n1", 00:29:47.561 "aliases": [ 00:29:47.561 "91a5f38d-6207-4908-9eb2-d04fc5ac2f7e" 00:29:47.561 ], 00:29:47.561 "product_name": "NVMe disk", 00:29:47.561 "block_size": 512, 00:29:47.561 "num_blocks": 2097152, 00:29:47.561 "uuid": "91a5f38d-6207-4908-9eb2-d04fc5ac2f7e", 00:29:47.561 "assigned_rate_limits": { 00:29:47.561 "rw_ios_per_sec": 0, 00:29:47.561 "rw_mbytes_per_sec": 0, 00:29:47.561 "r_mbytes_per_sec": 0, 00:29:47.561 "w_mbytes_per_sec": 0 00:29:47.561 }, 00:29:47.561 "claimed": false, 00:29:47.561 "zoned": false, 00:29:47.561 "supported_io_types": { 00:29:47.561 "read": true, 00:29:47.561 "write": true, 00:29:47.561 "unmap": false, 00:29:47.561 "write_zeroes": true, 00:29:47.561 "flush": true, 00:29:47.561 "reset": true, 00:29:47.561 "compare": true, 00:29:47.561 "compare_and_write": true, 00:29:47.561 "abort": true, 00:29:47.561 "nvme_admin": true, 00:29:47.561 "nvme_io": true 00:29:47.561 }, 00:29:47.561 "driver_specific": { 00:29:47.561 "nvme": [ 00:29:47.561 { 00:29:47.561 "trid": { 00:29:47.561 "trtype": "TCP", 00:29:47.561 "adrfam": "IPv4", 00:29:47.561 "traddr": "10.0.0.2", 00:29:47.561 "trsvcid": "4420", 00:29:47.561 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:47.561 }, 00:29:47.561 "ctrlr_data": { 00:29:47.561 "cntlid": 2, 00:29:47.561 "vendor_id": "0x8086", 00:29:47.561 "model_number": "SPDK bdev Controller", 00:29:47.561 "serial_number": "00000000000000000000", 00:29:47.561 "firmware_revision": "24.01.1", 00:29:47.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.561 "oacs": { 00:29:47.561 "security": 0, 00:29:47.561 "format": 0, 00:29:47.561 "firmware": 0, 00:29:47.561 "ns_manage": 0 00:29:47.561 }, 00:29:47.561 "multi_ctrlr": true, 00:29:47.561 "ana_reporting": false 00:29:47.561 }, 00:29:47.561 "vs": { 00:29:47.561 "nvme_version": "1.3" 00:29:47.561 }, 00:29:47.561 "ns_data": { 00:29:47.561 "id": 1, 00:29:47.561 "can_share": true 00:29:47.561 } 00:29:47.561 } 00:29:47.561 ], 00:29:47.561 "mp_policy": "active_passive" 00:29:47.561 } 00:29:47.561 } 00:29:47.561 ] 00:29:47.561 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.561 04:27:01 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.561 04:27:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.561 04:27:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.561 04:27:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.561 04:27:01 -- host/async_init.sh@53 -- # mktemp 00:29:47.561 04:27:01 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.pgbgdxunUC 00:29:47.561 04:27:01 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:47.561 04:27:01 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.pgbgdxunUC 00:29:47.561 04:27:02 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:47.561 04:27:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.561 04:27:02 -- common/autotest_common.sh@10 -- # set +x 00:29:47.561 04:27:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.561 04:27:02 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:47.561 04:27:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.561 04:27:02 -- common/autotest_common.sh@10 -- # set +x 00:29:47.561 [2024-05-14 04:27:02.016585] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:47.561 [2024-05-14 04:27:02.016780] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:47.561 04:27:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.561 04:27:02 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pgbgdxunUC 00:29:47.561 04:27:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.561 04:27:02 -- common/autotest_common.sh@10 -- # set +x 00:29:47.561 04:27:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.561 04:27:02 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pgbgdxunUC 00:29:47.561 04:27:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.561 04:27:02 -- common/autotest_common.sh@10 -- # set +x 00:29:47.561 [2024-05-14 04:27:02.032525] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:47.561 nvme0n1 00:29:47.561 04:27:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.561 04:27:02 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:47.561 04:27:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.561 04:27:02 -- common/autotest_common.sh@10 -- # set +x 00:29:47.561 [ 00:29:47.561 { 00:29:47.561 "name": "nvme0n1", 00:29:47.561 "aliases": [ 00:29:47.561 "91a5f38d-6207-4908-9eb2-d04fc5ac2f7e" 00:29:47.561 ], 00:29:47.561 "product_name": "NVMe disk", 00:29:47.561 "block_size": 512, 00:29:47.561 "num_blocks": 2097152, 00:29:47.561 "uuid": "91a5f38d-6207-4908-9eb2-d04fc5ac2f7e", 00:29:47.561 "assigned_rate_limits": { 00:29:47.561 "rw_ios_per_sec": 0, 00:29:47.561 "rw_mbytes_per_sec": 0, 00:29:47.561 "r_mbytes_per_sec": 0, 00:29:47.561 "w_mbytes_per_sec": 0 00:29:47.561 }, 00:29:47.561 "claimed": false, 00:29:47.561 "zoned": false, 00:29:47.561 "supported_io_types": { 00:29:47.561 "read": true, 00:29:47.561 "write": true, 00:29:47.561 "unmap": false, 00:29:47.561 "write_zeroes": true, 00:29:47.561 "flush": true, 00:29:47.561 "reset": true, 00:29:47.561 "compare": true, 00:29:47.561 "compare_and_write": true, 00:29:47.561 "abort": true, 00:29:47.561 "nvme_admin": true, 00:29:47.561 "nvme_io": true 00:29:47.561 }, 00:29:47.561 "driver_specific": { 00:29:47.561 "nvme": [ 00:29:47.561 { 00:29:47.561 "trid": { 00:29:47.561 "trtype": "TCP", 00:29:47.561 "adrfam": "IPv4", 00:29:47.561 "traddr": "10.0.0.2", 00:29:47.561 "trsvcid": "4421", 00:29:47.561 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:47.561 }, 00:29:47.561 "ctrlr_data": { 00:29:47.561 "cntlid": 3, 00:29:47.561 "vendor_id": "0x8086", 00:29:47.561 "model_number": "SPDK bdev Controller", 00:29:47.561 "serial_number": "00000000000000000000", 00:29:47.561 "firmware_revision": "24.01.1", 00:29:47.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.561 "oacs": { 00:29:47.561 "security": 0, 00:29:47.561 "format": 0, 00:29:47.561 "firmware": 0, 00:29:47.561 "ns_manage": 0 00:29:47.561 }, 00:29:47.561 "multi_ctrlr": true, 00:29:47.561 "ana_reporting": false 00:29:47.561 }, 00:29:47.561 "vs": 
{ 00:29:47.561 "nvme_version": "1.3" 00:29:47.561 }, 00:29:47.561 "ns_data": { 00:29:47.561 "id": 1, 00:29:47.561 "can_share": true 00:29:47.561 } 00:29:47.561 } 00:29:47.561 ], 00:29:47.561 "mp_policy": "active_passive" 00:29:47.561 } 00:29:47.561 } 00:29:47.561 ] 00:29:47.561 04:27:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.561 04:27:02 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.561 04:27:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.561 04:27:02 -- common/autotest_common.sh@10 -- # set +x 00:29:47.561 04:27:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.561 04:27:02 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.pgbgdxunUC 00:29:47.561 04:27:02 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:47.561 04:27:02 -- host/async_init.sh@78 -- # nvmftestfini 00:29:47.561 04:27:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:47.561 04:27:02 -- nvmf/common.sh@116 -- # sync 00:29:47.561 04:27:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:47.561 04:27:02 -- nvmf/common.sh@119 -- # set +e 00:29:47.561 04:27:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:47.561 04:27:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:47.822 rmmod nvme_tcp 00:29:47.822 rmmod nvme_fabrics 00:29:47.822 rmmod nvme_keyring 00:29:47.822 04:27:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:47.822 04:27:02 -- nvmf/common.sh@123 -- # set -e 00:29:47.822 04:27:02 -- nvmf/common.sh@124 -- # return 0 00:29:47.822 04:27:02 -- nvmf/common.sh@477 -- # '[' -n 4175497 ']' 00:29:47.822 04:27:02 -- nvmf/common.sh@478 -- # killprocess 4175497 00:29:47.822 04:27:02 -- common/autotest_common.sh@926 -- # '[' -z 4175497 ']' 00:29:47.822 04:27:02 -- common/autotest_common.sh@930 -- # kill -0 4175497 00:29:47.822 04:27:02 -- common/autotest_common.sh@931 -- # uname 00:29:47.822 04:27:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:47.822 04:27:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4175497 00:29:47.822 04:27:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:47.822 04:27:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:47.822 04:27:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4175497' 00:29:47.822 killing process with pid 4175497 00:29:47.822 04:27:02 -- common/autotest_common.sh@945 -- # kill 4175497 00:29:47.822 04:27:02 -- common/autotest_common.sh@950 -- # wait 4175497 00:29:48.391 04:27:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:48.391 04:27:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:48.391 04:27:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:48.391 04:27:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:48.391 04:27:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:48.391 04:27:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.391 04:27:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.391 04:27:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.296 04:27:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:50.296 00:29:50.296 real 0m10.140s 00:29:50.296 user 0m3.684s 00:29:50.296 sys 0m4.840s 00:29:50.296 04:27:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.296 04:27:04 -- common/autotest_common.sh@10 -- # set +x 00:29:50.296 ************************************ 00:29:50.296 END TEST nvmf_async_init 00:29:50.296 
************************************ 00:29:50.296 04:27:04 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:50.296 04:27:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:50.296 04:27:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:50.296 04:27:04 -- common/autotest_common.sh@10 -- # set +x 00:29:50.296 ************************************ 00:29:50.296 START TEST dma 00:29:50.296 ************************************ 00:29:50.296 04:27:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:50.554 * Looking for test storage... 00:29:50.554 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:50.554 04:27:04 -- host/dma.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.554 04:27:04 -- nvmf/common.sh@7 -- # uname -s 00:29:50.554 04:27:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.554 04:27:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.554 04:27:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.554 04:27:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.554 04:27:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.554 04:27:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.554 04:27:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.554 04:27:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.554 04:27:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.554 04:27:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.554 04:27:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:29:50.554 04:27:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:29:50.554 04:27:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.554 04:27:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.554 04:27:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:50.554 04:27:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:50.554 04:27:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.554 04:27:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.554 04:27:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.554 04:27:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.554 04:27:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.554 04:27:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.554 04:27:04 -- paths/export.sh@5 -- # export PATH 00:29:50.555 04:27:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.555 04:27:04 -- nvmf/common.sh@46 -- # : 0 00:29:50.555 04:27:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:50.555 04:27:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:50.555 04:27:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:50.555 04:27:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.555 04:27:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.555 04:27:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:50.555 04:27:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:50.555 04:27:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:50.555 04:27:04 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:50.555 04:27:04 -- host/dma.sh@13 -- # exit 0 00:29:50.555 00:29:50.555 real 0m0.079s 00:29:50.555 user 0m0.034s 00:29:50.555 sys 0m0.051s 00:29:50.555 04:27:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.555 04:27:04 -- common/autotest_common.sh@10 -- # set +x 00:29:50.555 ************************************ 00:29:50.555 END TEST dma 00:29:50.555 ************************************ 00:29:50.555 04:27:04 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:50.555 04:27:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:50.555 04:27:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:50.555 04:27:04 -- common/autotest_common.sh@10 -- # set +x 00:29:50.555 ************************************ 00:29:50.555 START TEST nvmf_identify 00:29:50.555 ************************************ 00:29:50.555 04:27:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:50.555 * Looking for test 
storage... 00:29:50.555 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:50.555 04:27:05 -- host/identify.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.555 04:27:05 -- nvmf/common.sh@7 -- # uname -s 00:29:50.555 04:27:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.555 04:27:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.555 04:27:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.555 04:27:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.555 04:27:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.555 04:27:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.555 04:27:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.555 04:27:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.555 04:27:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.555 04:27:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.555 04:27:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:29:50.555 04:27:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:29:50.555 04:27:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.555 04:27:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.555 04:27:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:50.555 04:27:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:50.555 04:27:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.555 04:27:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.555 04:27:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.555 04:27:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.555 04:27:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.555 04:27:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.555 04:27:05 -- paths/export.sh@5 -- # export PATH 00:29:50.555 04:27:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.555 04:27:05 -- nvmf/common.sh@46 -- # : 0 00:29:50.555 04:27:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:50.555 04:27:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:50.555 04:27:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:50.555 04:27:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.555 04:27:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.555 04:27:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:50.555 04:27:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:50.555 04:27:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:50.555 04:27:05 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:50.555 04:27:05 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:50.555 04:27:05 -- host/identify.sh@14 -- # nvmftestinit 00:29:50.555 04:27:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:50.555 04:27:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.555 04:27:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:50.555 04:27:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:50.555 04:27:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:50.555 04:27:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.555 04:27:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:50.555 04:27:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.555 04:27:05 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:50.555 04:27:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:50.555 04:27:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:50.555 04:27:05 -- common/autotest_common.sh@10 -- # set +x 00:29:55.831 04:27:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:55.831 04:27:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:55.831 04:27:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:55.831 04:27:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:55.831 04:27:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:55.831 04:27:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:55.831 04:27:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:55.831 04:27:10 -- nvmf/common.sh@294 -- # net_devs=() 00:29:55.831 04:27:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:55.831 04:27:10 -- 
nvmf/common.sh@295 -- # e810=() 00:29:55.831 04:27:10 -- nvmf/common.sh@295 -- # local -ga e810 00:29:55.831 04:27:10 -- nvmf/common.sh@296 -- # x722=() 00:29:55.831 04:27:10 -- nvmf/common.sh@296 -- # local -ga x722 00:29:55.831 04:27:10 -- nvmf/common.sh@297 -- # mlx=() 00:29:55.831 04:27:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:55.831 04:27:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.831 04:27:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:55.831 04:27:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:55.831 04:27:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:55.831 04:27:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:55.831 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:55.831 04:27:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:55.831 04:27:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:55.831 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:55.831 04:27:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:55.831 04:27:10 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:55.831 04:27:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.831 04:27:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:55.831 04:27:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.831 04:27:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:55.831 Found net devices under 0000:27:00.0: cvl_0_0 00:29:55.831 
04:27:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.831 04:27:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:55.831 04:27:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.831 04:27:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:55.831 04:27:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.831 04:27:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:55.831 Found net devices under 0000:27:00.1: cvl_0_1 00:29:55.831 04:27:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.831 04:27:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:55.831 04:27:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:55.831 04:27:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:55.831 04:27:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:55.831 04:27:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.831 04:27:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.831 04:27:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.831 04:27:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:55.831 04:27:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.831 04:27:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.831 04:27:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:55.831 04:27:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.831 04:27:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:55.831 04:27:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:55.831 04:27:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:55.831 04:27:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.831 04:27:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:55.831 04:27:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:55.831 04:27:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:55.831 04:27:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:55.831 04:27:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:55.831 04:27:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.093 04:27:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.093 04:27:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:56.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:29:56.093 00:29:56.093 --- 10.0.0.2 ping statistics --- 00:29:56.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.093 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:29:56.093 04:27:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:29:56.093 00:29:56.093 --- 10.0.0.1 ping statistics --- 00:29:56.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.093 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:29:56.093 04:27:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.093 04:27:10 -- nvmf/common.sh@410 -- # return 0 00:29:56.093 04:27:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:56.093 04:27:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.093 04:27:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:56.093 04:27:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:56.093 04:27:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.093 04:27:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:56.093 04:27:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:56.093 04:27:10 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:56.093 04:27:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:56.093 04:27:10 -- common/autotest_common.sh@10 -- # set +x 00:29:56.093 04:27:10 -- host/identify.sh@19 -- # nvmfpid=4180385 00:29:56.093 04:27:10 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:56.093 04:27:10 -- host/identify.sh@23 -- # waitforlisten 4180385 00:29:56.093 04:27:10 -- common/autotest_common.sh@819 -- # '[' -z 4180385 ']' 00:29:56.093 04:27:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.093 04:27:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:56.093 04:27:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.093 04:27:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:56.093 04:27:10 -- common/autotest_common.sh@10 -- # set +x 00:29:56.093 04:27:10 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:56.093 [2024-05-14 04:27:10.559029] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:29:56.093 [2024-05-14 04:27:10.559161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.093 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.354 [2024-05-14 04:27:10.697914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.354 [2024-05-14 04:27:10.793561] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:56.354 [2024-05-14 04:27:10.793755] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.354 [2024-05-14 04:27:10.793769] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.354 [2024-05-14 04:27:10.793779] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
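For the identify test the target is launched inside the target namespace again, this time with a four-core mask (-m 0xF), and waitforlisten blocks until the RPC socket answers before any rpc_cmd is issued. A rough standalone equivalent, with paths assumed relative to the SPDK repo root and a polling loop that only approximates what waitforlisten does, would be:

    # start the target in the namespace that owns 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # wait until the RPC socket at /var/tmp/spdk.sock accepts requests
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # then build the test subsystem exactly as traced below
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420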
00:29:56.354 [2024-05-14 04:27:10.796216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.354 [2024-05-14 04:27:10.796230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.354 [2024-05-14 04:27:10.796257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.354 [2024-05-14 04:27:10.796269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.925 04:27:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:56.925 04:27:11 -- common/autotest_common.sh@852 -- # return 0 00:29:56.925 04:27:11 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:56.925 04:27:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.925 04:27:11 -- common/autotest_common.sh@10 -- # set +x 00:29:56.925 [2024-05-14 04:27:11.277938] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.925 04:27:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.925 04:27:11 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:56.925 04:27:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:56.925 04:27:11 -- common/autotest_common.sh@10 -- # set +x 00:29:56.925 04:27:11 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:56.925 04:27:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.925 04:27:11 -- common/autotest_common.sh@10 -- # set +x 00:29:56.925 Malloc0 00:29:56.925 04:27:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.925 04:27:11 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:56.925 04:27:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.925 04:27:11 -- common/autotest_common.sh@10 -- # set +x 00:29:56.925 04:27:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.926 04:27:11 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:56.926 04:27:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.926 04:27:11 -- common/autotest_common.sh@10 -- # set +x 00:29:56.926 04:27:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.926 04:27:11 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.926 04:27:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.926 04:27:11 -- common/autotest_common.sh@10 -- # set +x 00:29:56.926 [2024-05-14 04:27:11.374269] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.926 04:27:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.926 04:27:11 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:56.926 04:27:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.926 04:27:11 -- common/autotest_common.sh@10 -- # set +x 00:29:56.926 04:27:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.926 04:27:11 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:56.926 04:27:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.926 04:27:11 -- common/autotest_common.sh@10 -- # set +x 00:29:56.926 [2024-05-14 04:27:11.389989] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:56.926 [ 
00:29:56.926 { 00:29:56.926 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:56.926 "subtype": "Discovery", 00:29:56.926 "listen_addresses": [ 00:29:56.926 { 00:29:56.926 "transport": "TCP", 00:29:56.926 "trtype": "TCP", 00:29:56.926 "adrfam": "IPv4", 00:29:56.926 "traddr": "10.0.0.2", 00:29:56.926 "trsvcid": "4420" 00:29:56.926 } 00:29:56.926 ], 00:29:56.926 "allow_any_host": true, 00:29:56.926 "hosts": [] 00:29:56.926 }, 00:29:56.926 { 00:29:56.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.926 "subtype": "NVMe", 00:29:56.926 "listen_addresses": [ 00:29:56.926 { 00:29:56.926 "transport": "TCP", 00:29:56.926 "trtype": "TCP", 00:29:56.926 "adrfam": "IPv4", 00:29:56.926 "traddr": "10.0.0.2", 00:29:56.926 "trsvcid": "4420" 00:29:56.926 } 00:29:56.926 ], 00:29:56.926 "allow_any_host": true, 00:29:56.926 "hosts": [], 00:29:56.926 "serial_number": "SPDK00000000000001", 00:29:56.926 "model_number": "SPDK bdev Controller", 00:29:56.926 "max_namespaces": 32, 00:29:56.926 "min_cntlid": 1, 00:29:56.926 "max_cntlid": 65519, 00:29:56.926 "namespaces": [ 00:29:56.926 { 00:29:56.926 "nsid": 1, 00:29:56.926 "bdev_name": "Malloc0", 00:29:56.926 "name": "Malloc0", 00:29:56.926 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:56.926 "eui64": "ABCDEF0123456789", 00:29:56.926 "uuid": "1a90f777-f6bf-43f8-83e4-166b941f8bca" 00:29:56.926 } 00:29:56.926 ] 00:29:56.926 } 00:29:56.926 ] 00:29:56.926 04:27:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.926 04:27:11 -- host/identify.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:56.926 [2024-05-14 04:27:11.426286] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
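With the discovery and cnode1 subsystems in place (the nvmf_get_subsystems dump above), identify.sh runs the standalone identify example against the discovery NQN; the nvme_ctrlr/nvme_tcp DEBUG lines that follow are the effect of its -L all flag. Stripped of the Jenkins workspace path, the invocation is:

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The trace below is the usual fabrics bring-up seen from the host side: an ICReq on the new TCP qpair, FABRIC CONNECT on the admin queue, PROPERTY GET of the vs and cap registers, then CC.EN toggled to enable the controller.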
00:29:56.926 [2024-05-14 04:27:11.426347] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4180520 ] 00:29:56.926 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.926 [2024-05-14 04:27:11.470116] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:56.926 [2024-05-14 04:27:11.470195] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:56.926 [2024-05-14 04:27:11.470208] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:56.926 [2024-05-14 04:27:11.470225] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:56.926 [2024-05-14 04:27:11.470237] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:56.926 [2024-05-14 04:27:11.470597] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:56.926 [2024-05-14 04:27:11.470635] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x613000001fc0 0 00:29:56.926 [2024-05-14 04:27:11.485198] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:56.926 [2024-05-14 04:27:11.485216] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:56.926 [2024-05-14 04:27:11.485222] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:56.926 [2024-05-14 04:27:11.485228] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:56.926 [2024-05-14 04:27:11.485270] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.485277] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.485285] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:56.926 [2024-05-14 04:27:11.485306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:56.926 [2024-05-14 04:27:11.485330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:56.926 [2024-05-14 04:27:11.493201] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:56.926 [2024-05-14 04:27:11.493217] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:56.926 [2024-05-14 04:27:11.493222] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493229] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:56.926 [2024-05-14 04:27:11.493241] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:56.926 [2024-05-14 04:27:11.493253] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:56.926 [2024-05-14 04:27:11.493260] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:56.926 [2024-05-14 04:27:11.493276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493285] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493290] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:56.926 [2024-05-14 04:27:11.493305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.926 [2024-05-14 04:27:11.493323] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:56.926 [2024-05-14 04:27:11.493457] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:56.926 [2024-05-14 04:27:11.493465] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:56.926 [2024-05-14 04:27:11.493476] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493482] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:56.926 [2024-05-14 04:27:11.493490] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:56.926 [2024-05-14 04:27:11.493500] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:56.926 [2024-05-14 04:27:11.493508] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493512] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493517] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:56.926 [2024-05-14 04:27:11.493531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.926 [2024-05-14 04:27:11.493542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:56.926 [2024-05-14 04:27:11.493633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:56.926 [2024-05-14 04:27:11.493643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:56.926 [2024-05-14 04:27:11.493647] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493651] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:56.926 [2024-05-14 04:27:11.493658] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:56.926 [2024-05-14 04:27:11.493666] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:56.926 [2024-05-14 04:27:11.493674] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493678] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493685] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:56.926 [2024-05-14 04:27:11.493694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.926 [2024-05-14 04:27:11.493704] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:56.926 [2024-05-14 04:27:11.493782] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:56.926 [2024-05-14 04:27:11.493789] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:56.926 [2024-05-14 04:27:11.493794] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493798] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:56.926 [2024-05-14 04:27:11.493804] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:56.926 [2024-05-14 04:27:11.493817] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493821] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493826] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:56.926 [2024-05-14 04:27:11.493835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.926 [2024-05-14 04:27:11.493845] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:56.926 [2024-05-14 04:27:11.493928] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:56.926 [2024-05-14 04:27:11.493935] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:56.926 [2024-05-14 04:27:11.493939] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:56.926 [2024-05-14 04:27:11.493943] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:56.926 [2024-05-14 04:27:11.493952] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:56.926 [2024-05-14 04:27:11.493961] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:56.926 [2024-05-14 04:27:11.493969] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:56.927 [2024-05-14 04:27:11.494075] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:56.927 [2024-05-14 04:27:11.494081] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:56.927 [2024-05-14 04:27:11.494096] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:56.927 [2024-05-14 04:27:11.494101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:56.927 [2024-05-14 04:27:11.494106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:56.927 [2024-05-14 04:27:11.494115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.927 [2024-05-14 04:27:11.494125] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:56.927 [2024-05-14 04:27:11.494208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:56.927 [2024-05-14 04:27:11.494215] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:56.927 [2024-05-14 04:27:11.494220] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:56.927 [2024-05-14 04:27:11.494224] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:56.927 [2024-05-14 04:27:11.494230] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:56.927 [2024-05-14 04:27:11.494241] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:56.927 [2024-05-14 04:27:11.494246] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:56.927 [2024-05-14 04:27:11.494251] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:56.927 [2024-05-14 04:27:11.494259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.927 [2024-05-14 04:27:11.494270] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:56.927 [2024-05-14 04:27:11.494346] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:56.927 [2024-05-14 04:27:11.494353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:56.927 [2024-05-14 04:27:11.494357] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:56.927 [2024-05-14 04:27:11.494361] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:56.927 [2024-05-14 04:27:11.494367] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:56.927 [2024-05-14 04:27:11.494374] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:56.927 [2024-05-14 04:27:11.494382] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:56.927 [2024-05-14 04:27:11.494390] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:56.927 [2024-05-14 04:27:11.494402] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:56.927 [2024-05-14 04:27:11.494408] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:56.927 [2024-05-14 04:27:11.494413] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:56.927 [2024-05-14 04:27:11.494422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.927 [2024-05-14 04:27:11.494433] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:56.927 [2024-05-14 04:27:11.494552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:56.927 [2024-05-14 04:27:11.494559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:56.927 [2024-05-14 04:27:11.494563] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:56.927 [2024-05-14 04:27:11.494569] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x613000001fc0): datao=0, datal=4096, cccid=0 00:29:56.927 [2024-05-14 04:27:11.494576] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:56.927 [2024-05-14 04:27:11.494590] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:56.927 [2024-05-14 04:27:11.494596] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535423] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.188 [2024-05-14 04:27:11.535439] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.188 [2024-05-14 04:27:11.535443] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535449] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.188 [2024-05-14 04:27:11.535464] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:57.188 [2024-05-14 04:27:11.535472] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:57.188 [2024-05-14 04:27:11.535481] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:57.188 [2024-05-14 04:27:11.535487] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:57.188 [2024-05-14 04:27:11.535495] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:57.188 [2024-05-14 04:27:11.535501] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:57.188 [2024-05-14 04:27:11.535510] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:57.188 [2024-05-14 04:27:11.535521] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535526] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535531] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.188 [2024-05-14 04:27:11.535543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:57.188 [2024-05-14 04:27:11.535560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.188 [2024-05-14 04:27:11.535652] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.188 [2024-05-14 04:27:11.535658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.188 [2024-05-14 04:27:11.535662] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535666] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.188 [2024-05-14 04:27:11.535675] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535679] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535684] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.188 [2024-05-14 04:27:11.535693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.188 [2024-05-14 04:27:11.535700] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535704] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535709] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x613000001fc0) 00:29:57.188 [2024-05-14 04:27:11.535716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.188 [2024-05-14 04:27:11.535722] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535730] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x613000001fc0) 00:29:57.188 [2024-05-14 04:27:11.535739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.188 [2024-05-14 04:27:11.535745] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535749] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535753] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.188 [2024-05-14 04:27:11.535760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.188 [2024-05-14 04:27:11.535765] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:57.188 [2024-05-14 04:27:11.535775] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:57.188 [2024-05-14 04:27:11.535782] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535787] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535792] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.188 [2024-05-14 04:27:11.535803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.188 [2024-05-14 04:27:11.535815] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.188 [2024-05-14 04:27:11.535820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:29:57.188 [2024-05-14 04:27:11.535825] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:29:57.188 [2024-05-14 04:27:11.535830] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.188 [2024-05-14 04:27:11.535835] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.188 [2024-05-14 04:27:11.535954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
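The nvme_ctrlr_identify_done trace above reports the values SPDK caches from IDENTIFY CONTROLLER (the MDTS-limited transfer size, CNTLID 0x0001, fused compare-and-write support) before it arms the four AER commands and the keep-alive timer. A hedged sketch of reading those cached fields back through the public API, assuming a connected ctrlr handle as in the earlier sketch; the helper name is illustrative.

#include <stdio.h>
#include "spdk/nvme.h"

/* Print a few of the fields the identify-done trace reports, using the
 * controller data SPDK caches after IDENTIFY CONTROLLER completes. */
static void
print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

    printf("CNTLID:                         0x%04x\n", cdata->cntlid);
    printf("MDTS (raw, units of CAP.MPSMIN): %u\n", cdata->mdts);
    printf("Fused compare-and-write:        %s\n",
           cdata->fuses.compare_and_write ? "Supported" : "Not Supported");
    printf("Max queue entries (CAP.MQES+1): %u\n", cap.bits.mqes + 1u);
}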
00:29:57.188 [2024-05-14 04:27:11.535961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.188 [2024-05-14 04:27:11.535965] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.535969] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.188 [2024-05-14 04:27:11.535975] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:57.188 [2024-05-14 04:27:11.535983] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:57.188 [2024-05-14 04:27:11.535996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.536006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.536011] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.188 [2024-05-14 04:27:11.536020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.188 [2024-05-14 04:27:11.536031] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.188 [2024-05-14 04:27:11.536125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.188 [2024-05-14 04:27:11.536136] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.188 [2024-05-14 04:27:11.536140] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.536146] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:57.188 [2024-05-14 04:27:11.536152] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:57.188 [2024-05-14 04:27:11.536170] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.536176] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.536222] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.188 [2024-05-14 04:27:11.536229] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.188 [2024-05-14 04:27:11.536233] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.536238] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.188 [2024-05-14 04:27:11.536253] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:57.188 [2024-05-14 04:27:11.536288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.536294] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.536299] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.188 [2024-05-14 04:27:11.536308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.188 [2024-05-14 04:27:11.536316] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.536320] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.188 [2024-05-14 04:27:11.536325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:57.188 [2024-05-14 04:27:11.536333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.189 [2024-05-14 04:27:11.536346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.189 [2024-05-14 04:27:11.536352] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:57.189 [2024-05-14 04:27:11.536529] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.189 [2024-05-14 04:27:11.536536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.189 [2024-05-14 04:27:11.536540] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.536546] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=1024, cccid=4 00:29:57.189 [2024-05-14 04:27:11.536552] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=1024 00:29:57.189 [2024-05-14 04:27:11.536560] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.536565] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.536571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.189 [2024-05-14 04:27:11.536578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.189 [2024-05-14 04:27:11.536582] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.536587] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:57.189 [2024-05-14 04:27:11.581196] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.189 [2024-05-14 04:27:11.581211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.189 [2024-05-14 04:27:11.581215] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.189 [2024-05-14 04:27:11.581242] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581248] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581253] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.189 [2024-05-14 04:27:11.581264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.189 [2024-05-14 04:27:11.581287] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.189 [2024-05-14 04:27:11.581420] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.189 [2024-05-14 04:27:11.581427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.189 [2024-05-14 04:27:11.581431] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581435] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=3072, cccid=4 00:29:57.189 [2024-05-14 04:27:11.581441] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=3072 00:29:57.189 [2024-05-14 04:27:11.581449] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581453] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.189 [2024-05-14 04:27:11.581483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.189 [2024-05-14 04:27:11.581487] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581491] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.189 [2024-05-14 04:27:11.581502] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581507] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581512] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.189 [2024-05-14 04:27:11.581521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.189 [2024-05-14 04:27:11.581532] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.189 [2024-05-14 04:27:11.581639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.189 [2024-05-14 04:27:11.581645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.189 [2024-05-14 04:27:11.581649] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581654] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=8, cccid=4 00:29:57.189 [2024-05-14 04:27:11.581659] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=8 00:29:57.189 [2024-05-14 04:27:11.581666] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.581670] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.622384] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.189 [2024-05-14 04:27:11.622401] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.189 [2024-05-14 04:27:11.622406] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.189 [2024-05-14 04:27:11.622410] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.189 ===================================================== 00:29:57.189 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:57.189 ===================================================== 00:29:57.189 Controller Capabilities/Features 00:29:57.189 ================================ 00:29:57.189 Vendor ID: 0000 00:29:57.189 Subsystem Vendor ID: 0000 00:29:57.189 Serial 
Number: .................... 00:29:57.189 Model Number: ........................................ 00:29:57.189 Firmware Version: 24.01.1 00:29:57.189 Recommended Arb Burst: 0 00:29:57.189 IEEE OUI Identifier: 00 00 00 00:29:57.189 Multi-path I/O 00:29:57.189 May have multiple subsystem ports: No 00:29:57.189 May have multiple controllers: No 00:29:57.189 Associated with SR-IOV VF: No 00:29:57.189 Max Data Transfer Size: 131072 00:29:57.189 Max Number of Namespaces: 0 00:29:57.189 Max Number of I/O Queues: 1024 00:29:57.189 NVMe Specification Version (VS): 1.3 00:29:57.189 NVMe Specification Version (Identify): 1.3 00:29:57.189 Maximum Queue Entries: 128 00:29:57.189 Contiguous Queues Required: Yes 00:29:57.189 Arbitration Mechanisms Supported 00:29:57.189 Weighted Round Robin: Not Supported 00:29:57.189 Vendor Specific: Not Supported 00:29:57.189 Reset Timeout: 15000 ms 00:29:57.189 Doorbell Stride: 4 bytes 00:29:57.189 NVM Subsystem Reset: Not Supported 00:29:57.189 Command Sets Supported 00:29:57.189 NVM Command Set: Supported 00:29:57.189 Boot Partition: Not Supported 00:29:57.189 Memory Page Size Minimum: 4096 bytes 00:29:57.189 Memory Page Size Maximum: 4096 bytes 00:29:57.189 Persistent Memory Region: Not Supported 00:29:57.189 Optional Asynchronous Events Supported 00:29:57.189 Namespace Attribute Notices: Not Supported 00:29:57.189 Firmware Activation Notices: Not Supported 00:29:57.189 ANA Change Notices: Not Supported 00:29:57.189 PLE Aggregate Log Change Notices: Not Supported 00:29:57.189 LBA Status Info Alert Notices: Not Supported 00:29:57.189 EGE Aggregate Log Change Notices: Not Supported 00:29:57.189 Normal NVM Subsystem Shutdown event: Not Supported 00:29:57.189 Zone Descriptor Change Notices: Not Supported 00:29:57.189 Discovery Log Change Notices: Supported 00:29:57.189 Controller Attributes 00:29:57.189 128-bit Host Identifier: Not Supported 00:29:57.189 Non-Operational Permissive Mode: Not Supported 00:29:57.189 NVM Sets: Not Supported 00:29:57.189 Read Recovery Levels: Not Supported 00:29:57.189 Endurance Groups: Not Supported 00:29:57.189 Predictable Latency Mode: Not Supported 00:29:57.189 Traffic Based Keep ALive: Not Supported 00:29:57.189 Namespace Granularity: Not Supported 00:29:57.189 SQ Associations: Not Supported 00:29:57.189 UUID List: Not Supported 00:29:57.189 Multi-Domain Subsystem: Not Supported 00:29:57.189 Fixed Capacity Management: Not Supported 00:29:57.189 Variable Capacity Management: Not Supported 00:29:57.189 Delete Endurance Group: Not Supported 00:29:57.189 Delete NVM Set: Not Supported 00:29:57.189 Extended LBA Formats Supported: Not Supported 00:29:57.189 Flexible Data Placement Supported: Not Supported 00:29:57.189 00:29:57.189 Controller Memory Buffer Support 00:29:57.189 ================================ 00:29:57.189 Supported: No 00:29:57.189 00:29:57.189 Persistent Memory Region Support 00:29:57.189 ================================ 00:29:57.189 Supported: No 00:29:57.189 00:29:57.189 Admin Command Set Attributes 00:29:57.189 ============================ 00:29:57.189 Security Send/Receive: Not Supported 00:29:57.189 Format NVM: Not Supported 00:29:57.189 Firmware Activate/Download: Not Supported 00:29:57.189 Namespace Management: Not Supported 00:29:57.189 Device Self-Test: Not Supported 00:29:57.189 Directives: Not Supported 00:29:57.189 NVMe-MI: Not Supported 00:29:57.189 Virtualization Management: Not Supported 00:29:57.189 Doorbell Buffer Config: Not Supported 00:29:57.189 Get LBA Status Capability: Not Supported 00:29:57.189 Command 
& Feature Lockdown Capability: Not Supported 00:29:57.189 Abort Command Limit: 1 00:29:57.189 Async Event Request Limit: 4 00:29:57.189 Number of Firmware Slots: N/A 00:29:57.189 Firmware Slot 1 Read-Only: N/A 00:29:57.189 Firmware Activation Without Reset: N/A 00:29:57.189 Multiple Update Detection Support: N/A 00:29:57.189 Firmware Update Granularity: No Information Provided 00:29:57.189 Per-Namespace SMART Log: No 00:29:57.189 Asymmetric Namespace Access Log Page: Not Supported 00:29:57.189 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:57.189 Command Effects Log Page: Not Supported 00:29:57.189 Get Log Page Extended Data: Supported 00:29:57.189 Telemetry Log Pages: Not Supported 00:29:57.189 Persistent Event Log Pages: Not Supported 00:29:57.189 Supported Log Pages Log Page: May Support 00:29:57.189 Commands Supported & Effects Log Page: Not Supported 00:29:57.189 Feature Identifiers & Effects Log Page:May Support 00:29:57.189 NVMe-MI Commands & Effects Log Page: May Support 00:29:57.189 Data Area 4 for Telemetry Log: Not Supported 00:29:57.189 Error Log Page Entries Supported: 128 00:29:57.189 Keep Alive: Not Supported 00:29:57.189 00:29:57.189 NVM Command Set Attributes 00:29:57.189 ========================== 00:29:57.189 Submission Queue Entry Size 00:29:57.189 Max: 1 00:29:57.189 Min: 1 00:29:57.189 Completion Queue Entry Size 00:29:57.189 Max: 1 00:29:57.189 Min: 1 00:29:57.189 Number of Namespaces: 0 00:29:57.189 Compare Command: Not Supported 00:29:57.189 Write Uncorrectable Command: Not Supported 00:29:57.189 Dataset Management Command: Not Supported 00:29:57.189 Write Zeroes Command: Not Supported 00:29:57.189 Set Features Save Field: Not Supported 00:29:57.189 Reservations: Not Supported 00:29:57.189 Timestamp: Not Supported 00:29:57.189 Copy: Not Supported 00:29:57.189 Volatile Write Cache: Not Present 00:29:57.189 Atomic Write Unit (Normal): 1 00:29:57.189 Atomic Write Unit (PFail): 1 00:29:57.189 Atomic Compare & Write Unit: 1 00:29:57.189 Fused Compare & Write: Supported 00:29:57.189 Scatter-Gather List 00:29:57.189 SGL Command Set: Supported 00:29:57.189 SGL Keyed: Supported 00:29:57.189 SGL Bit Bucket Descriptor: Not Supported 00:29:57.189 SGL Metadata Pointer: Not Supported 00:29:57.189 Oversized SGL: Not Supported 00:29:57.189 SGL Metadata Address: Not Supported 00:29:57.189 SGL Offset: Supported 00:29:57.189 Transport SGL Data Block: Not Supported 00:29:57.189 Replay Protected Memory Block: Not Supported 00:29:57.189 00:29:57.189 Firmware Slot Information 00:29:57.189 ========================= 00:29:57.189 Active slot: 0 00:29:57.189 00:29:57.189 00:29:57.189 Error Log 00:29:57.189 ========= 00:29:57.189 00:29:57.189 Active Namespaces 00:29:57.189 ================= 00:29:57.189 Discovery Log Page 00:29:57.189 ================== 00:29:57.189 Generation Counter: 2 00:29:57.189 Number of Records: 2 00:29:57.189 Record Format: 0 00:29:57.189 00:29:57.189 Discovery Log Entry 0 00:29:57.190 ---------------------- 00:29:57.190 Transport Type: 3 (TCP) 00:29:57.190 Address Family: 1 (IPv4) 00:29:57.190 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:57.190 Entry Flags: 00:29:57.190 Duplicate Returned Information: 1 00:29:57.190 Explicit Persistent Connection Support for Discovery: 1 00:29:57.190 Transport Requirements: 00:29:57.190 Secure Channel: Not Required 00:29:57.190 Port ID: 0 (0x0000) 00:29:57.190 Controller ID: 65535 (0xffff) 00:29:57.190 Admin Max SQ Size: 128 00:29:57.190 Transport Service Identifier: 4420 00:29:57.190 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:29:57.190 Transport Address: 10.0.0.2 00:29:57.190 Discovery Log Entry 1 00:29:57.190 ---------------------- 00:29:57.190 Transport Type: 3 (TCP) 00:29:57.190 Address Family: 1 (IPv4) 00:29:57.190 Subsystem Type: 2 (NVM Subsystem) 00:29:57.190 Entry Flags: 00:29:57.190 Duplicate Returned Information: 0 00:29:57.190 Explicit Persistent Connection Support for Discovery: 0 00:29:57.190 Transport Requirements: 00:29:57.190 Secure Channel: Not Required 00:29:57.190 Port ID: 0 (0x0000) 00:29:57.190 Controller ID: 65535 (0xffff) 00:29:57.190 Admin Max SQ Size: 128 00:29:57.190 Transport Service Identifier: 4420 00:29:57.190 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:57.190 Transport Address: 10.0.0.2 [2024-05-14 04:27:11.622534] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:57.190 [2024-05-14 04:27:11.622549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.190 [2024-05-14 04:27:11.622557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.190 [2024-05-14 04:27:11.622564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.190 [2024-05-14 04:27:11.622571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.190 [2024-05-14 04:27:11.622583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.622589] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.622596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.622607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.622623] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.622709] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.622716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.190 [2024-05-14 04:27:11.622721] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.622726] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.622735] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.622740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.622747] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.622757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.622769] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.622863] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.622870] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.190 [2024-05-14 04:27:11.622873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.622878] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.622884] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:57.190 [2024-05-14 04:27:11.622890] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:57.190 [2024-05-14 04:27:11.622899] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.622904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.622909] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.622917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.622930] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.623004] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.623010] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.190 [2024-05-14 04:27:11.623014] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623018] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.623027] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623031] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623036] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.623043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.623052] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.623135] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.623141] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.190 [2024-05-14 04:27:11.623146] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623150] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.623160] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623168] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.623176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.623190] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.623260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.623266] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.190 [2024-05-14 04:27:11.623269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623273] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.623282] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623286] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623294] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.623302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.623312] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.623392] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.623399] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.190 [2024-05-14 04:27:11.623402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623407] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.623415] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623419] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623423] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.623430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.623439] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.623526] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.623532] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.190 [2024-05-14 04:27:11.623536] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623540] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.623549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623553] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623557] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.623565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.623574] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.623678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.623684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.190 [2024-05-14 04:27:11.623691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623695] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.623705] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623708] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623713] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.623722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.623732] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.623827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.623833] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.190 [2024-05-14 04:27:11.623837] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623841] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.623851] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623855] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623859] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.623867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.623876] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.623955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.623961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.190 [2024-05-14 04:27:11.623964] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623969] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.623978] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.623986] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.623994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.624003] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.190 [2024-05-14 04:27:11.624080] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.190 [2024-05-14 04:27:11.624086] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
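The GET LOG PAGE (02) commands traced earlier, with cdw10 values 0x00ff0070, 0x02ff0070 and 0x00010070, fetch log page 0x70 (the discovery log): header first, then the full payload, then a re-read of the generation counter; the two discovery log entries printed above come from that payload. A rough sketch of issuing the same header read through the public get-log-page helper, with a simple polling loop; everything other than the SPDK calls and types is illustrative.

#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cb_arg;
    (void)cpl;
    g_log_done = true;
}

/* Fetch the 1024-byte discovery log header (log page 0x70), matching the
 * nsid:0, offset 0 GET LOG PAGE seen in the trace above. */
static int
fetch_discovery_log_header(struct spdk_nvme_ctrlr *ctrlr,
                           struct spdk_nvmf_discovery_log_page *hdr)
{
    int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                              0, hdr, sizeof(*hdr), 0,
                                              get_log_done, NULL);
    if (rc != 0) {
        return rc;
    }
    while (!g_log_done) {
        /* Drive the admin queue until the completion arrives. */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    return 0;
}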
00:29:57.190 [2024-05-14 04:27:11.624090] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.624094] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.190 [2024-05-14 04:27:11.624103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.624107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.190 [2024-05-14 04:27:11.624111] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.190 [2024-05-14 04:27:11.624119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.190 [2024-05-14 04:27:11.624128] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.191 [2024-05-14 04:27:11.624211] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.624217] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.624222] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624226] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.191 [2024-05-14 04:27:11.624236] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624240] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624244] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.191 [2024-05-14 04:27:11.624251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.191 [2024-05-14 04:27:11.624261] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.191 [2024-05-14 04:27:11.624332] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.624338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.624342] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624346] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.191 [2024-05-14 04:27:11.624355] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624359] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624363] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.191 [2024-05-14 04:27:11.624371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.191 [2024-05-14 04:27:11.624381] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.191 [2024-05-14 04:27:11.624469] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.624476] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.624480] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 
04:27:11.624484] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.191 [2024-05-14 04:27:11.624493] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624497] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624502] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.191 [2024-05-14 04:27:11.624509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.191 [2024-05-14 04:27:11.624519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.191 [2024-05-14 04:27:11.624604] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.624611] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.624614] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624618] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.191 [2024-05-14 04:27:11.624627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624631] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624635] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.191 [2024-05-14 04:27:11.624643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.191 [2024-05-14 04:27:11.624652] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.191 [2024-05-14 04:27:11.624734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.624740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.624745] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624749] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.191 [2024-05-14 04:27:11.624758] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.191 [2024-05-14 04:27:11.624775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.191 [2024-05-14 04:27:11.624785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.191 [2024-05-14 04:27:11.624866] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.624872] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.624876] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624880] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 
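The run of FABRIC PROPERTY GET entries in this part of the trace is the host driver polling CSTS after writing CC.SHN, waiting for the target to report shutdown complete ("Prepare to destruct SSD" above, "shutdown complete" a few lines further on). Detaching the controller is what starts that sequence; a hedged sketch using the asynchronous detach helpers follows, and the blocking spdk_nvme_detach(ctrlr) call does the same thing in one step.

#include <errno.h>
#include "spdk/nvme.h"

/* Begin detaching the controller and poll until the shutdown state
 * machine (CC.SHN write, CSTS polling) traced above has finished. */
static void
detach_and_wait(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_detach_ctx *ctx = NULL;

    if (spdk_nvme_detach_async(ctrlr, &ctx) != 0 || ctx == NULL) {
        return;
    }
    while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
        /* keep driving the detach until CSTS reports shutdown complete */
    }
}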
00:29:57.191 [2024-05-14 04:27:11.624889] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624893] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.624898] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.191 [2024-05-14 04:27:11.624905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.191 [2024-05-14 04:27:11.624914] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.191 [2024-05-14 04:27:11.624996] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.625002] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.625006] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.625010] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.191 [2024-05-14 04:27:11.625019] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.625023] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.625028] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.191 [2024-05-14 04:27:11.625035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.191 [2024-05-14 04:27:11.625044] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.191 [2024-05-14 04:27:11.625134] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.625140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.625144] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.625148] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.191 [2024-05-14 04:27:11.625157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.625160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.625165] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.191 [2024-05-14 04:27:11.625173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.191 [2024-05-14 04:27:11.629192] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.191 [2024-05-14 04:27:11.629204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.629211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.629216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.629221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.191 [2024-05-14 04:27:11.629231] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.191 [2024-05-14 
04:27:11.629235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:57.191 [2024-05-14 04:27:11.629239] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0)
00:29:57.191 [2024-05-14 04:27:11.629247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:57.191 [2024-05-14 04:27:11.629258] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0
00:29:57.191 [2024-05-14 04:27:11.629352] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:57.191 [2024-05-14 04:27:11.629358] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:57.191 [2024-05-14 04:27:11.629361] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:57.191 [2024-05-14 04:27:11.629366] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0
00:29:57.191 [2024-05-14 04:27:11.629373] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:29:57.191
00:29:57.191 04:27:11 -- host/identify.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:29:57.191 [2024-05-14 04:27:11.695035] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:29:57.191 [2024-05-14 04:27:11.695117] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4180683 ]
00:29:57.191 EAL: No free 2048 kB hugepages reported on node 1
00:29:57.191 [2024-05-14 04:27:11.742983] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:29:57.191 [2024-05-14 04:27:11.743061] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:29:57.191 [2024-05-14 04:27:11.743070] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:29:57.191 [2024-05-14 04:27:11.743089] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:29:57.191 [2024-05-14 04:27:11.743100] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:29:57.191 [2024-05-14 04:27:11.743359] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:29:57.191 [2024-05-14 04:27:11.743390] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x613000001fc0 0
00:29:57.191 [2024-05-14 04:27:11.754194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:29:57.191 [2024-05-14 04:27:11.754211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:29:57.191 [2024-05-14 04:27:11.754218] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:29:57.191 [2024-05-14 04:27:11.754223] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:29:57.191 [2024-05-14 04:27:11.754260] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:57.191 [2024-05-14 04:27:11.754268] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:57.191 [2024-05-14
04:27:11.754275] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.191 [2024-05-14 04:27:11.754294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:57.191 [2024-05-14 04:27:11.754318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.191 [2024-05-14 04:27:11.762198] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.762213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.762219] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.762226] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.191 [2024-05-14 04:27:11.762237] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:57.191 [2024-05-14 04:27:11.762249] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:57.191 [2024-05-14 04:27:11.762256] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:57.191 [2024-05-14 04:27:11.762273] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.762280] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.762287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.191 [2024-05-14 04:27:11.762301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.191 [2024-05-14 04:27:11.762319] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.191 [2024-05-14 04:27:11.762425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.191 [2024-05-14 04:27:11.762433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.191 [2024-05-14 04:27:11.762442] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.191 [2024-05-14 04:27:11.762447] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.192 [2024-05-14 04:27:11.762454] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:57.192 [2024-05-14 04:27:11.762463] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:57.192 [2024-05-14 04:27:11.762472] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.762477] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.762483] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.192 [2024-05-14 04:27:11.762495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.192 [2024-05-14 04:27:11.762507] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.192 [2024-05-14 04:27:11.762585] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.192 [2024-05-14 04:27:11.762592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.192 [2024-05-14 04:27:11.762596] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.762601] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.192 [2024-05-14 04:27:11.762608] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:57.192 [2024-05-14 04:27:11.762616] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:57.192 [2024-05-14 04:27:11.762623] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.762628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.762633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.192 [2024-05-14 04:27:11.762643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.192 [2024-05-14 04:27:11.762654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.192 [2024-05-14 04:27:11.762738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.192 [2024-05-14 04:27:11.762744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.192 [2024-05-14 04:27:11.762748] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.762753] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.192 [2024-05-14 04:27:11.762759] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:57.192 [2024-05-14 04:27:11.762769] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.762774] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.762779] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.192 [2024-05-14 04:27:11.762788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.192 [2024-05-14 04:27:11.762798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.192 [2024-05-14 04:27:11.762883] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.192 [2024-05-14 04:27:11.762892] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.192 [2024-05-14 04:27:11.762896] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.762900] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.192 [2024-05-14 04:27:11.762906] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:57.192 [2024-05-14 04:27:11.762913] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
controller is disabled (timeout 15000 ms) 00:29:57.192 [2024-05-14 04:27:11.762922] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:57.192 [2024-05-14 04:27:11.763028] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:57.192 [2024-05-14 04:27:11.763033] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:57.192 [2024-05-14 04:27:11.763045] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.763049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.763055] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.192 [2024-05-14 04:27:11.763064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.192 [2024-05-14 04:27:11.763075] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.192 [2024-05-14 04:27:11.763169] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.192 [2024-05-14 04:27:11.763175] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.192 [2024-05-14 04:27:11.763180] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.763187] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.192 [2024-05-14 04:27:11.763193] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:57.192 [2024-05-14 04:27:11.763203] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.763209] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.763214] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.192 [2024-05-14 04:27:11.763223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.192 [2024-05-14 04:27:11.763235] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.192 [2024-05-14 04:27:11.763316] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.192 [2024-05-14 04:27:11.763323] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.192 [2024-05-14 04:27:11.763327] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.763331] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.192 [2024-05-14 04:27:11.763337] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:57.192 [2024-05-14 04:27:11.763343] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:57.192 [2024-05-14 04:27:11.763351] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
controller (no timeout) 00:29:57.192 [2024-05-14 04:27:11.763361] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:57.192 [2024-05-14 04:27:11.763373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.763377] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.763383] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.192 [2024-05-14 04:27:11.763393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.192 [2024-05-14 04:27:11.763404] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.192 [2024-05-14 04:27:11.763519] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.192 [2024-05-14 04:27:11.763525] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.192 [2024-05-14 04:27:11.763529] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.763535] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=0 00:29:57.192 [2024-05-14 04:27:11.763542] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:57.192 [2024-05-14 04:27:11.763556] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.192 [2024-05-14 04:27:11.763563] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805398] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.454 [2024-05-14 04:27:11.805412] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.454 [2024-05-14 04:27:11.805417] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805423] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.454 [2024-05-14 04:27:11.805438] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:57.454 [2024-05-14 04:27:11.805445] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:57.454 [2024-05-14 04:27:11.805451] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:57.454 [2024-05-14 04:27:11.805457] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:57.454 [2024-05-14 04:27:11.805468] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:57.454 [2024-05-14 04:27:11.805475] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:57.454 [2024-05-14 04:27:11.805484] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:57.454 [2024-05-14 04:27:11.805498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805509] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.454 [2024-05-14 04:27:11.805522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:57.454 [2024-05-14 04:27:11.805537] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.454 [2024-05-14 04:27:11.805625] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.454 [2024-05-14 04:27:11.805631] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.454 [2024-05-14 04:27:11.805635] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805640] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:57.454 [2024-05-14 04:27:11.805648] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:57.454 [2024-05-14 04:27:11.805669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.454 [2024-05-14 04:27:11.805676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805684] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x613000001fc0) 00:29:57.454 [2024-05-14 04:27:11.805691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.454 [2024-05-14 04:27:11.805697] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805701] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805706] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x613000001fc0) 00:29:57.454 [2024-05-14 04:27:11.805713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.454 [2024-05-14 04:27:11.805719] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805723] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805727] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.454 [2024-05-14 04:27:11.805734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.454 [2024-05-14 04:27:11.805739] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:57.454 [2024-05-14 04:27:11.805748] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:57.454 [2024-05-14 04:27:11.805756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:29:57.454 [2024-05-14 04:27:11.805760] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.454 [2024-05-14 04:27:11.805774] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.454 [2024-05-14 04:27:11.805787] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:57.454 [2024-05-14 04:27:11.805792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:29:57.454 [2024-05-14 04:27:11.805798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:29:57.454 [2024-05-14 04:27:11.805803] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.454 [2024-05-14 04:27:11.805809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.454 [2024-05-14 04:27:11.805921] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.454 [2024-05-14 04:27:11.805928] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.454 [2024-05-14 04:27:11.805931] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805936] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.454 [2024-05-14 04:27:11.805942] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:57.454 [2024-05-14 04:27:11.805950] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:57.454 [2024-05-14 04:27:11.805960] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:57.454 [2024-05-14 04:27:11.805973] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:57.454 [2024-05-14 04:27:11.805981] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805985] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.454 [2024-05-14 04:27:11.805991] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.454 [2024-05-14 04:27:11.805999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:57.454 [2024-05-14 04:27:11.806010] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.454 [2024-05-14 04:27:11.806086] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.454 [2024-05-14 04:27:11.806093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.454 [2024-05-14 04:27:11.806097] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.806101] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.455 [2024-05-14 04:27:11.806148] 
nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.806160] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.806170] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.806175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.806181] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.455 [2024-05-14 04:27:11.810199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.455 [2024-05-14 04:27:11.810212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.455 [2024-05-14 04:27:11.810319] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.455 [2024-05-14 04:27:11.810326] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.455 [2024-05-14 04:27:11.810331] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.810336] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:57.455 [2024-05-14 04:27:11.810342] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:57.455 [2024-05-14 04:27:11.810358] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.810363] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.852403] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.455 [2024-05-14 04:27:11.852417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.455 [2024-05-14 04:27:11.852422] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.852427] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.455 [2024-05-14 04:27:11.852451] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:57.455 [2024-05-14 04:27:11.852466] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.852476] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.852486] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.852492] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.852497] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.455 [2024-05-14 04:27:11.852509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.455 [2024-05-14 04:27:11.852531] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 
00:29:57.455 [2024-05-14 04:27:11.852632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.455 [2024-05-14 04:27:11.852641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.455 [2024-05-14 04:27:11.852645] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.852650] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:57.455 [2024-05-14 04:27:11.852656] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:57.455 [2024-05-14 04:27:11.852668] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.852672] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.898193] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.455 [2024-05-14 04:27:11.898207] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.455 [2024-05-14 04:27:11.898211] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.898216] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.455 [2024-05-14 04:27:11.898236] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.898247] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.898259] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.898264] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.898270] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.455 [2024-05-14 04:27:11.898281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.455 [2024-05-14 04:27:11.898297] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.455 [2024-05-14 04:27:11.898402] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.455 [2024-05-14 04:27:11.898408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.455 [2024-05-14 04:27:11.898415] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.898420] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:57.455 [2024-05-14 04:27:11.898426] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:57.455 [2024-05-14 04:27:11.898437] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.898442] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940390] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.455 [2024-05-14 04:27:11.940404] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.455 
[2024-05-14 04:27:11.940408] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940414] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.455 [2024-05-14 04:27:11.940429] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.940438] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.940448] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.940456] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.940464] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.940471] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:57.455 [2024-05-14 04:27:11.940477] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:57.455 [2024-05-14 04:27:11.940483] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:57.455 [2024-05-14 04:27:11.940507] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940513] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940518] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.455 [2024-05-14 04:27:11.940533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.455 [2024-05-14 04:27:11.940542] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940546] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940551] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:57.455 [2024-05-14 04:27:11.940562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.455 [2024-05-14 04:27:11.940577] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.455 [2024-05-14 04:27:11.940583] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:57.455 [2024-05-14 04:27:11.940688] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.455 [2024-05-14 04:27:11.940696] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.455 [2024-05-14 04:27:11.940700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940706] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:57.455 [2024-05-14 04:27:11.940714] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:29:57.455 [2024-05-14 04:27:11.940723] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.455 [2024-05-14 04:27:11.940729] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940733] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:57.455 [2024-05-14 04:27:11.940742] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940745] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940750] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:57.455 [2024-05-14 04:27:11.940758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.455 [2024-05-14 04:27:11.940767] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:57.455 [2024-05-14 04:27:11.940847] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.455 [2024-05-14 04:27:11.940853] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.455 [2024-05-14 04:27:11.940856] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940861] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:57.455 [2024-05-14 04:27:11.940870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940874] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940878] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:57.455 [2024-05-14 04:27:11.940886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.455 [2024-05-14 04:27:11.940896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:57.455 [2024-05-14 04:27:11.940973] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.455 [2024-05-14 04:27:11.940979] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.455 [2024-05-14 04:27:11.940984] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.940988] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:57.455 [2024-05-14 04:27:11.940997] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.941000] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.455 [2024-05-14 04:27:11.941005] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:57.456 [2024-05-14 04:27:11.941012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.456 [2024-05-14 04:27:11.941022] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:57.456 [2024-05-14 04:27:11.941098] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.456 [2024-05-14 04:27:11.941104] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.456 [2024-05-14 04:27:11.941108] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941113] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:57.456 [2024-05-14 04:27:11.941129] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941139] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:57.456 [2024-05-14 04:27:11.941148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.456 [2024-05-14 04:27:11.941157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941163] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941169] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:57.456 [2024-05-14 04:27:11.941178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.456 [2024-05-14 04:27:11.941191] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941195] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941205] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x613000001fc0) 00:29:57.456 [2024-05-14 04:27:11.941214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.456 [2024-05-14 04:27:11.941223] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941228] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941233] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x613000001fc0) 00:29:57.456 [2024-05-14 04:27:11.941243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.456 [2024-05-14 04:27:11.941256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:57.456 [2024-05-14 04:27:11.941262] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:57.456 [2024-05-14 04:27:11.941267] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:29:57.456 [2024-05-14 04:27:11.941272] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:29:57.456 [2024-05-14 04:27:11.941421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.456 [2024-05-14 04:27:11.941428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.456 [2024-05-14 04:27:11.941433] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941437] 
nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=8192, cccid=5 00:29:57.456 [2024-05-14 04:27:11.941443] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x613000001fc0): expected_datao=0, payload_size=8192 00:29:57.456 [2024-05-14 04:27:11.941472] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941477] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941484] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.456 [2024-05-14 04:27:11.941491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.456 [2024-05-14 04:27:11.941494] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941499] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=512, cccid=4 00:29:57.456 [2024-05-14 04:27:11.941504] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=512 00:29:57.456 [2024-05-14 04:27:11.941512] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941515] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941524] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.456 [2024-05-14 04:27:11.941530] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.456 [2024-05-14 04:27:11.941534] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941538] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=512, cccid=6 00:29:57.456 [2024-05-14 04:27:11.941543] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x613000001fc0): expected_datao=0, payload_size=512 00:29:57.456 [2024-05-14 04:27:11.941551] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941555] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941562] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.456 [2024-05-14 04:27:11.941568] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.456 [2024-05-14 04:27:11.941571] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941576] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=7 00:29:57.456 [2024-05-14 04:27:11.941581] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:57.456 [2024-05-14 04:27:11.941590] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941593] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941601] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.456 [2024-05-14 04:27:11.941607] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.456 [2024-05-14 04:27:11.941611] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.456 [2024-05-14 04:27:11.941616] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0
00:29:57.456 [2024-05-14 04:27:11.941634] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:57.456 [2024-05-14 04:27:11.941640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:57.456 [2024-05-14 04:27:11.941644] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:57.456 [2024-05-14 04:27:11.941649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0
00:29:57.456 [2024-05-14 04:27:11.941660] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:57.456 [2024-05-14 04:27:11.941666] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:57.456 [2024-05-14 04:27:11.941670] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:57.456 [2024-05-14 04:27:11.941674] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x613000001fc0
00:29:57.456 [2024-05-14 04:27:11.941684] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:57.456 [2024-05-14 04:27:11.941690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:57.456 [2024-05-14 04:27:11.941694] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:57.456 [2024-05-14 04:27:11.941698] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x613000001fc0
00:29:57.456 =====================================================
00:29:57.456 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:57.456 =====================================================
00:29:57.456 Controller Capabilities/Features
00:29:57.456 ================================
00:29:57.456 Vendor ID: 8086
00:29:57.456 Subsystem Vendor ID: 8086
00:29:57.456 Serial Number: SPDK00000000000001
00:29:57.456 Model Number: SPDK bdev Controller
00:29:57.456 Firmware Version: 24.01.1
00:29:57.456 Recommended Arb Burst: 6
00:29:57.456 IEEE OUI Identifier: e4 d2 5c
00:29:57.456 Multi-path I/O
00:29:57.456 May have multiple subsystem ports: Yes
00:29:57.456 May have multiple controllers: Yes
00:29:57.456 Associated with SR-IOV VF: No
00:29:57.456 Max Data Transfer Size: 131072
00:29:57.456 Max Number of Namespaces: 32
00:29:57.456 Max Number of I/O Queues: 127
00:29:57.456 NVMe Specification Version (VS): 1.3
00:29:57.456 NVMe Specification Version (Identify): 1.3
00:29:57.456 Maximum Queue Entries: 128
00:29:57.456 Contiguous Queues Required: Yes
00:29:57.456 Arbitration Mechanisms Supported
00:29:57.456 Weighted Round Robin: Not Supported
00:29:57.456 Vendor Specific: Not Supported
00:29:57.456 Reset Timeout: 15000 ms
00:29:57.456 Doorbell Stride: 4 bytes
00:29:57.456 NVM Subsystem Reset: Not Supported
00:29:57.456 Command Sets Supported
00:29:57.456 NVM Command Set: Supported
00:29:57.456 Boot Partition: Not Supported
00:29:57.456 Memory Page Size Minimum: 4096 bytes
00:29:57.456 Memory Page Size Maximum: 4096 bytes
00:29:57.456 Persistent Memory Region: Not Supported
00:29:57.456 Optional Asynchronous Events Supported
00:29:57.456 Namespace Attribute Notices: Supported
00:29:57.456 Firmware Activation Notices: Not Supported
00:29:57.456 ANA Change Notices: Not Supported
00:29:57.456 PLE Aggregate Log Change Notices: Not Supported
00:29:57.456 LBA Status Info Alert Notices: Not Supported
00:29:57.456 EGE Aggregate Log Change Notices: Not Supported
00:29:57.456 Normal NVM Subsystem Shutdown event: Not Supported
00:29:57.456 Zone Descriptor Change Notices: Not Supported
00:29:57.456 Discovery Log Change Notices: Not Supported
00:29:57.456 Controller Attributes
00:29:57.456 128-bit Host Identifier: Supported
00:29:57.456 Non-Operational Permissive Mode: Not Supported
00:29:57.456 NVM Sets: Not Supported
00:29:57.456 Read Recovery Levels: Not Supported
00:29:57.456 Endurance Groups: Not Supported
00:29:57.456 Predictable Latency Mode: Not Supported
00:29:57.456 Traffic Based Keep ALive: Not Supported
00:29:57.456 Namespace Granularity: Not Supported
00:29:57.456 SQ Associations: Not Supported
00:29:57.456 UUID List: Not Supported
00:29:57.456 Multi-Domain Subsystem: Not Supported
00:29:57.456 Fixed Capacity Management: Not Supported
00:29:57.456 Variable Capacity Management: Not Supported
00:29:57.456 Delete Endurance Group: Not Supported
00:29:57.456 Delete NVM Set: Not Supported
00:29:57.456 Extended LBA Formats Supported: Not Supported
00:29:57.457 Flexible Data Placement Supported: Not Supported
00:29:57.457
00:29:57.457 Controller Memory Buffer Support
00:29:57.457 ================================
00:29:57.457 Supported: No
00:29:57.457
00:29:57.457 Persistent Memory Region Support
00:29:57.457 ================================
00:29:57.457 Supported: No
00:29:57.457
00:29:57.457 Admin Command Set Attributes
00:29:57.457 ============================
00:29:57.457 Security Send/Receive: Not Supported
00:29:57.457 Format NVM: Not Supported
00:29:57.457 Firmware Activate/Download: Not Supported
00:29:57.457 Namespace Management: Not Supported
00:29:57.457 Device Self-Test: Not Supported
00:29:57.457 Directives: Not Supported
00:29:57.457 NVMe-MI: Not Supported
00:29:57.457 Virtualization Management: Not Supported
00:29:57.457 Doorbell Buffer Config: Not Supported
00:29:57.457 Get LBA Status Capability: Not Supported
00:29:57.457 Command & Feature Lockdown Capability: Not Supported
00:29:57.457 Abort Command Limit: 4
00:29:57.457 Async Event Request Limit: 4
00:29:57.457 Number of Firmware Slots: N/A
00:29:57.457 Firmware Slot 1 Read-Only: N/A
00:29:57.457 Firmware Activation Without Reset: N/A
00:29:57.457 Multiple Update Detection Support: N/A
00:29:57.457 Firmware Update Granularity: No Information Provided
00:29:57.457 Per-Namespace SMART Log: No
00:29:57.457 Asymmetric Namespace Access Log Page: Not Supported
00:29:57.457 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:29:57.457 Command Effects Log Page: Supported
00:29:57.457 Get Log Page Extended Data: Supported
00:29:57.457 Telemetry Log Pages: Not Supported
00:29:57.457 Persistent Event Log Pages: Not Supported
00:29:57.457 Supported Log Pages Log Page: May Support
00:29:57.457 Commands Supported & Effects Log Page: Not Supported
00:29:57.457 Feature Identifiers & Effects Log Page:May Support
00:29:57.457 NVMe-MI Commands & Effects Log Page: May Support
00:29:57.457 Data Area 4 for Telemetry Log: Not Supported
00:29:57.457 Error Log Page Entries Supported: 128
00:29:57.457 Keep Alive: Supported
00:29:57.457 Keep Alive Granularity: 10000 ms
00:29:57.457
00:29:57.457 NVM Command Set Attributes
00:29:57.457 ==========================
00:29:57.457 Submission Queue Entry Size
00:29:57.457 Max: 64
00:29:57.457 Min: 64
00:29:57.457 Completion Queue Entry Size
00:29:57.457 Max: 16
00:29:57.457 Min: 16
00:29:57.457 Number of Namespaces: 32
00:29:57.457 Compare Command: Supported
00:29:57.457 Write Uncorrectable Command: Not Supported
00:29:57.457 Dataset Management Command: Supported
00:29:57.457 Write Zeroes Command: Supported
00:29:57.457 Set Features Save Field: Not Supported
00:29:57.457 Reservations: Supported
00:29:57.457 Timestamp: Not Supported
00:29:57.457 Copy: Supported
00:29:57.457 Volatile Write Cache: Present
00:29:57.457 Atomic Write Unit (Normal): 1
00:29:57.457 Atomic Write Unit (PFail): 1
00:29:57.457 Atomic Compare & Write Unit: 1
00:29:57.457 Fused Compare & Write: Supported
00:29:57.457 Scatter-Gather List
00:29:57.457 SGL Command Set: Supported
00:29:57.457 SGL Keyed: Supported
00:29:57.457 SGL Bit Bucket Descriptor: Not Supported
00:29:57.457 SGL Metadata Pointer: Not Supported
00:29:57.457 Oversized SGL: Not Supported
00:29:57.457 SGL Metadata Address: Not Supported
00:29:57.457 SGL Offset: Supported
00:29:57.457 Transport SGL Data Block: Not Supported
00:29:57.457 Replay Protected Memory Block: Not Supported
00:29:57.457
00:29:57.457 Firmware Slot Information
00:29:57.457 =========================
00:29:57.457 Active slot: 1
00:29:57.457 Slot 1 Firmware Revision: 24.01.1
00:29:57.457
00:29:57.457
00:29:57.457 Commands Supported and Effects
00:29:57.457 ==============================
00:29:57.457 Admin Commands
00:29:57.457 --------------
00:29:57.457 Get Log Page (02h): Supported
00:29:57.457 Identify (06h): Supported
00:29:57.457 Abort (08h): Supported
00:29:57.457 Set Features (09h): Supported
00:29:57.457 Get Features (0Ah): Supported
00:29:57.457 Asynchronous Event Request (0Ch): Supported
00:29:57.457 Keep Alive (18h): Supported
00:29:57.457 I/O Commands
00:29:57.457 ------------
00:29:57.457 Flush (00h): Supported LBA-Change
00:29:57.457 Write (01h): Supported LBA-Change
00:29:57.457 Read (02h): Supported
00:29:57.457 Compare (05h): Supported
00:29:57.457 Write Zeroes (08h): Supported LBA-Change
00:29:57.457 Dataset Management (09h): Supported LBA-Change
00:29:57.457 Copy (19h): Supported LBA-Change
00:29:57.457 Unknown (79h): Supported LBA-Change
00:29:57.457 Unknown (7Ah): Supported
00:29:57.457
00:29:57.457 Error Log
00:29:57.457 =========
00:29:57.457
00:29:57.457 Arbitration
00:29:57.457 ===========
00:29:57.457 Arbitration Burst: 1
00:29:57.457
00:29:57.457 Power Management
00:29:57.457 ================
00:29:57.457 Number of Power States: 1
00:29:57.457 Current Power State: Power State #0
00:29:57.457 Power State #0:
00:29:57.457 Max Power: 0.00 W
00:29:57.457 Non-Operational State: Operational
00:29:57.457 Entry Latency: Not Reported
00:29:57.457 Exit Latency: Not Reported
00:29:57.457 Relative Read Throughput: 0
00:29:57.457 Relative Read Latency: 0
00:29:57.457 Relative Write Throughput: 0
00:29:57.457 Relative Write Latency: 0
00:29:57.457 Idle Power: Not Reported
00:29:57.457 Active Power: Not Reported
00:29:57.457 Non-Operational Permissive Mode: Not Supported
00:29:57.457
00:29:57.457 Health Information
00:29:57.457 ==================
00:29:57.457 Critical Warnings:
00:29:57.457 Available Spare Space: OK
00:29:57.457 Temperature: OK
00:29:57.457 Device Reliability: OK
00:29:57.457 Read Only: No
00:29:57.457 Volatile Memory Backup: OK
00:29:57.457 Current Temperature: 0 Kelvin (-273 Celsius)
00:29:57.457 Temperature Threshold: [2024-05-14 04:27:11.941828] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:57.457 [2024-05-14 04:27:11.941833] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:57.457 [2024-05-14 04:27:11.941841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x613000001fc0)
00:29:57.457 [2024-05-14 04:27:11.941850] nvme_qpair.c: 213:nvme_admin_qpair_print_command:
*NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.457 [2024-05-14 04:27:11.941860] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:29:57.457 [2024-05-14 04:27:11.941955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.457 [2024-05-14 04:27:11.941962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.457 [2024-05-14 04:27:11.941966] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.457 [2024-05-14 04:27:11.941974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x613000001fc0 00:29:57.457 [2024-05-14 04:27:11.942011] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:57.457 [2024-05-14 04:27:11.942023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.457 [2024-05-14 04:27:11.942031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.457 [2024-05-14 04:27:11.942039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.457 [2024-05-14 04:27:11.942046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.457 [2024-05-14 04:27:11.942055] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.457 [2024-05-14 04:27:11.942061] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.457 [2024-05-14 04:27:11.942066] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.457 [2024-05-14 04:27:11.942076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.457 [2024-05-14 04:27:11.942087] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.457 [2024-05-14 04:27:11.942172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.457 [2024-05-14 04:27:11.942179] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.457 [2024-05-14 04:27:11.946188] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.457 [2024-05-14 04:27:11.946196] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.457 [2024-05-14 04:27:11.946206] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.457 [2024-05-14 04:27:11.946212] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.457 [2024-05-14 04:27:11.946219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.457 [2024-05-14 04:27:11.946228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.457 [2024-05-14 04:27:11.946242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.457 [2024-05-14 04:27:11.946341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.457 [2024-05-14 04:27:11.946348] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:29:57.457 [2024-05-14 04:27:11.946352] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.457 [2024-05-14 04:27:11.946357] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.457 [2024-05-14 04:27:11.946363] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:57.458 [2024-05-14 04:27:11.946369] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:57.458 [2024-05-14 04:27:11.946380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946385] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946390] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.458 [2024-05-14 04:27:11.946400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.458 [2024-05-14 04:27:11.946410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.458 [2024-05-14 04:27:11.946490] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.458 [2024-05-14 04:27:11.946496] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.458 [2024-05-14 04:27:11.946500] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946504] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.458 [2024-05-14 04:27:11.946514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946519] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946523] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.458 [2024-05-14 04:27:11.946531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.458 [2024-05-14 04:27:11.946542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.458 [2024-05-14 04:27:11.946616] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.458 [2024-05-14 04:27:11.946622] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.458 [2024-05-14 04:27:11.946627] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946631] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.458 [2024-05-14 04:27:11.946640] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946644] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946648] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.458 [2024-05-14 04:27:11.946656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.458 [2024-05-14 04:27:11.946665] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.458 [2024-05-14 
04:27:11.946742] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.458 [2024-05-14 04:27:11.946748] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.458 [2024-05-14 04:27:11.946752] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946756] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.458 [2024-05-14 04:27:11.946766] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946769] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946774] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.458 [2024-05-14 04:27:11.946782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.458 [2024-05-14 04:27:11.946791] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.458 [2024-05-14 04:27:11.946875] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.458 [2024-05-14 04:27:11.946881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.458 [2024-05-14 04:27:11.946885] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946889] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.458 [2024-05-14 04:27:11.946898] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946902] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.946907] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.458 [2024-05-14 04:27:11.946922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.458 [2024-05-14 04:27:11.946931] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.458 [2024-05-14 04:27:11.947017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.458 [2024-05-14 04:27:11.947023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.458 [2024-05-14 04:27:11.947027] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947032] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.458 [2024-05-14 04:27:11.947041] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947045] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947050] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.458 [2024-05-14 04:27:11.947058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.458 [2024-05-14 04:27:11.947068] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.458 [2024-05-14 04:27:11.947149] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.458 [2024-05-14 04:27:11.947155] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.458 [2024-05-14 04:27:11.947159] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947163] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.458 [2024-05-14 04:27:11.947172] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947176] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947181] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.458 [2024-05-14 04:27:11.947192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.458 [2024-05-14 04:27:11.947201] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.458 [2024-05-14 04:27:11.947286] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.458 [2024-05-14 04:27:11.947292] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.458 [2024-05-14 04:27:11.947296] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947300] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.458 [2024-05-14 04:27:11.947310] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947314] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947318] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.458 [2024-05-14 04:27:11.947326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.458 [2024-05-14 04:27:11.947335] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.458 [2024-05-14 04:27:11.947410] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.458 [2024-05-14 04:27:11.947416] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.458 [2024-05-14 04:27:11.947420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947425] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.458 [2024-05-14 04:27:11.947434] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947443] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.458 [2024-05-14 04:27:11.947450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.458 [2024-05-14 04:27:11.947459] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.458 [2024-05-14 04:27:11.947532] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.458 [2024-05-14 04:27:11.947538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.458 [2024-05-14 04:27:11.947542] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947546] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.458 [2024-05-14 04:27:11.947557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947561] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.458 [2024-05-14 04:27:11.947566] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.458 [2024-05-14 04:27:11.947574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.947584] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.947669] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.947676] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.947680] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.947684] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 04:27:11.947693] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.947697] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.947701] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.947709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.947719] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.947800] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.947806] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.947810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.947814] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 04:27:11.947824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.947828] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.947832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.947840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.947850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.947924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.947930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.947934] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.947938] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 04:27:11.947948] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.947952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.947956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.947965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.947974] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.948047] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.948053] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.948057] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948062] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 04:27:11.948071] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948075] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948079] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.948087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.948098] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.948181] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.948194] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.948198] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 04:27:11.948211] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948215] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.948226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.948235] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.948319] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.948325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.948329] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948334] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 
04:27:11.948343] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948347] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948351] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.948358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.948368] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.948453] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.948460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.948464] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948468] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 04:27:11.948478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948486] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.948494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.948503] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.948588] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.948594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.948598] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 04:27:11.948612] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948616] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948620] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.948628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.948638] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.948717] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.948723] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.948727] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948731] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 04:27:11.948740] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948745] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948749] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.948757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.948766] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.948837] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.948843] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.948847] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948851] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 04:27:11.948860] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948864] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948868] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.948876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.948886] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.459 [2024-05-14 04:27:11.948965] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.459 [2024-05-14 04:27:11.948972] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.459 [2024-05-14 04:27:11.948976] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.459 [2024-05-14 04:27:11.948989] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948993] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.459 [2024-05-14 04:27:11.948997] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.459 [2024-05-14 04:27:11.949006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.459 [2024-05-14 04:27:11.949015] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.460 [2024-05-14 04:27:11.949089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.949095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.949099] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949103] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.949112] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949116] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949121] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.460 [2024-05-14 04:27:11.949129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.460 [2024-05-14 04:27:11.949139] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.460 [2024-05-14 04:27:11.949217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.949224] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.949228] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949232] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.949241] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949245] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949250] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.460 [2024-05-14 04:27:11.949257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.460 [2024-05-14 04:27:11.949267] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.460 [2024-05-14 04:27:11.949349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.949355] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.949359] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.949373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949377] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949381] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.460 [2024-05-14 04:27:11.949389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.460 [2024-05-14 04:27:11.949399] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.460 [2024-05-14 04:27:11.949484] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.949490] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.949494] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.949508] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949512] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949516] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.460 [2024-05-14 04:27:11.949524] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.460 [2024-05-14 04:27:11.949533] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.460 [2024-05-14 04:27:11.949612] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.949618] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.949622] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949627] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.949636] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949640] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949644] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.460 [2024-05-14 04:27:11.949652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.460 [2024-05-14 04:27:11.949662] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.460 [2024-05-14 04:27:11.949737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.949744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.949747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949752] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.949761] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949765] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949769] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.460 [2024-05-14 04:27:11.949777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.460 [2024-05-14 04:27:11.949786] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.460 [2024-05-14 04:27:11.949864] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.949870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.949874] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949878] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.949888] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949892] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.949896] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.460 [2024-05-14 04:27:11.949904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
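The long run of FABRIC PROPERTY GET entries above is the host-side identify utility polling the controller status property while it detaches from nqn.2016-06.io.spdk:cnode1; the polling stops once the target reports shutdown complete a few entries further down. A hedged way to re-run just that identify step by hand against the same listener (the binary path is an assumption based on this workspace's build layout; the -r transport-ID syntax is the same one spdk_nvme_perf uses later in this log):

    # Assumed path to the identify example built in this workspace; the -r
    # string mirrors the tcp listener used throughout this run.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'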
00:29:57.460 [2024-05-14 04:27:11.949914] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.460 [2024-05-14 04:27:11.949987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.949993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.949997] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.950002] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.950011] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.950015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.950019] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.460 [2024-05-14 04:27:11.950028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.460 [2024-05-14 04:27:11.950037] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.460 [2024-05-14 04:27:11.950111] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.950117] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.950121] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.950125] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.950135] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.950139] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.950143] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.460 [2024-05-14 04:27:11.950151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.460 [2024-05-14 04:27:11.950162] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:57.460 [2024-05-14 04:27:11.954193] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.954211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.954215] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.954220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.954230] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.954234] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.954238] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:57.460 [2024-05-14 04:27:11.954247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.460 [2024-05-14 04:27:11.954257] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, 
qid 0 00:29:57.460 [2024-05-14 04:27:11.954338] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.460 [2024-05-14 04:27:11.954344] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.460 [2024-05-14 04:27:11.954348] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.460 [2024-05-14 04:27:11.954353] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:57.460 [2024-05-14 04:27:11.954360] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:29:57.460 0 Kelvin (-273 Celsius) 00:29:57.460 Available Spare: 0% 00:29:57.460 Available Spare Threshold: 0% 00:29:57.460 Life Percentage Used: 0% 00:29:57.460 Data Units Read: 0 00:29:57.460 Data Units Written: 0 00:29:57.460 Host Read Commands: 0 00:29:57.460 Host Write Commands: 0 00:29:57.460 Controller Busy Time: 0 minutes 00:29:57.460 Power Cycles: 0 00:29:57.460 Power On Hours: 0 hours 00:29:57.460 Unsafe Shutdowns: 0 00:29:57.460 Unrecoverable Media Errors: 0 00:29:57.460 Lifetime Error Log Entries: 0 00:29:57.460 Warning Temperature Time: 0 minutes 00:29:57.460 Critical Temperature Time: 0 minutes 00:29:57.460 00:29:57.460 Number of Queues 00:29:57.460 ================ 00:29:57.460 Number of I/O Submission Queues: 127 00:29:57.460 Number of I/O Completion Queues: 127 00:29:57.460 00:29:57.460 Active Namespaces 00:29:57.460 ================= 00:29:57.460 Namespace ID:1 00:29:57.460 Error Recovery Timeout: Unlimited 00:29:57.460 Command Set Identifier: NVM (00h) 00:29:57.460 Deallocate: Supported 00:29:57.460 Deallocated/Unwritten Error: Not Supported 00:29:57.461 Deallocated Read Value: Unknown 00:29:57.461 Deallocate in Write Zeroes: Not Supported 00:29:57.461 Deallocated Guard Field: 0xFFFF 00:29:57.461 Flush: Supported 00:29:57.461 Reservation: Supported 00:29:57.461 Namespace Sharing Capabilities: Multiple Controllers 00:29:57.461 Size (in LBAs): 131072 (0GiB) 00:29:57.461 Capacity (in LBAs): 131072 (0GiB) 00:29:57.461 Utilization (in LBAs): 131072 (0GiB) 00:29:57.461 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:57.461 EUI64: ABCDEF0123456789 00:29:57.461 UUID: 1a90f777-f6bf-43f8-83e4-166b941f8bca 00:29:57.461 Thin Provisioning: Not Supported 00:29:57.461 Per-NS Atomic Units: Yes 00:29:57.461 Atomic Boundary Size (Normal): 0 00:29:57.461 Atomic Boundary Size (PFail): 0 00:29:57.461 Atomic Boundary Offset: 0 00:29:57.461 Maximum Single Source Range Length: 65535 00:29:57.461 Maximum Copy Length: 65535 00:29:57.461 Maximum Source Range Count: 1 00:29:57.461 NGUID/EUI64 Never Reused: No 00:29:57.461 Namespace Write Protected: No 00:29:57.461 Number of LBA Formats: 1 00:29:57.461 Current LBA Format: LBA Format #00 00:29:57.461 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:57.461 00:29:57.461 04:27:11 -- host/identify.sh@51 -- # sync 00:29:57.461 04:27:11 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:57.461 04:27:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.461 04:27:11 -- common/autotest_common.sh@10 -- # set +x 00:29:57.461 04:27:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.461 04:27:11 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:57.461 04:27:11 -- host/identify.sh@56 -- # nvmftestfini 00:29:57.461 04:27:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:57.461 04:27:11 -- nvmf/common.sh@116 -- # sync 00:29:57.461 04:27:12 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:57.461 04:27:12 -- nvmf/common.sh@119 -- # set +e 00:29:57.461 04:27:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:57.461 04:27:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:57.461 rmmod nvme_tcp 00:29:57.461 rmmod nvme_fabrics 00:29:57.461 rmmod nvme_keyring 00:29:57.722 04:27:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:57.722 04:27:12 -- nvmf/common.sh@123 -- # set -e 00:29:57.722 04:27:12 -- nvmf/common.sh@124 -- # return 0 00:29:57.722 04:27:12 -- nvmf/common.sh@477 -- # '[' -n 4180385 ']' 00:29:57.722 04:27:12 -- nvmf/common.sh@478 -- # killprocess 4180385 00:29:57.722 04:27:12 -- common/autotest_common.sh@926 -- # '[' -z 4180385 ']' 00:29:57.722 04:27:12 -- common/autotest_common.sh@930 -- # kill -0 4180385 00:29:57.722 04:27:12 -- common/autotest_common.sh@931 -- # uname 00:29:57.722 04:27:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:57.722 04:27:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4180385 00:29:57.722 04:27:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:57.722 04:27:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:57.722 04:27:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4180385' 00:29:57.722 killing process with pid 4180385 00:29:57.722 04:27:12 -- common/autotest_common.sh@945 -- # kill 4180385 00:29:57.722 [2024-05-14 04:27:12.093820] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:57.722 04:27:12 -- common/autotest_common.sh@950 -- # wait 4180385 00:29:58.291 04:27:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:58.291 04:27:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:58.291 04:27:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:58.291 04:27:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:58.291 04:27:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:58.291 04:27:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.291 04:27:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:58.291 04:27:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.193 04:27:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:00.193 00:30:00.193 real 0m9.737s 00:30:00.193 user 0m8.332s 00:30:00.193 sys 0m4.580s 00:30:00.193 04:27:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:00.193 04:27:14 -- common/autotest_common.sh@10 -- # set +x 00:30:00.193 ************************************ 00:30:00.193 END TEST nvmf_identify 00:30:00.193 ************************************ 00:30:00.193 04:27:14 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:00.193 04:27:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:00.193 04:27:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:00.193 04:27:14 -- common/autotest_common.sh@10 -- # set +x 00:30:00.193 ************************************ 00:30:00.193 START TEST nvmf_perf 00:30:00.193 ************************************ 00:30:00.193 04:27:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:00.453 * Looking for test storage... 
00:30:00.453 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:30:00.453 04:27:14 -- host/perf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.453 04:27:14 -- nvmf/common.sh@7 -- # uname -s 00:30:00.453 04:27:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.453 04:27:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.453 04:27:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.453 04:27:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.453 04:27:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.453 04:27:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.453 04:27:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.453 04:27:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.453 04:27:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.453 04:27:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.453 04:27:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:30:00.453 04:27:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:30:00.453 04:27:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.453 04:27:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.453 04:27:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:00.453 04:27:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:00.453 04:27:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.453 04:27:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.453 04:27:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.453 04:27:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.453 04:27:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.453 04:27:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.453 04:27:14 -- paths/export.sh@5 -- # export PATH 00:30:00.453 04:27:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.453 04:27:14 -- nvmf/common.sh@46 -- # : 0 00:30:00.453 04:27:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:00.453 04:27:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:00.453 04:27:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:00.453 04:27:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.453 04:27:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.453 04:27:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:00.453 04:27:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:00.453 04:27:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:00.453 04:27:14 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:00.453 04:27:14 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:00.453 04:27:14 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:30:00.453 04:27:14 -- host/perf.sh@17 -- # nvmftestinit 00:30:00.453 04:27:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:00.453 04:27:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.453 04:27:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:00.453 04:27:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:00.453 04:27:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:00.453 04:27:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.453 04:27:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.453 04:27:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.453 04:27:14 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:30:00.453 04:27:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:00.453 04:27:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:00.453 04:27:14 -- common/autotest_common.sh@10 -- # set +x 00:30:05.794 04:27:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:05.794 04:27:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:05.794 04:27:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:05.794 04:27:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:05.794 04:27:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:05.794 04:27:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:05.794 04:27:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:05.794 04:27:19 -- nvmf/common.sh@294 -- # net_devs=() 
00:30:05.794 04:27:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:05.794 04:27:19 -- nvmf/common.sh@295 -- # e810=() 00:30:05.794 04:27:19 -- nvmf/common.sh@295 -- # local -ga e810 00:30:05.794 04:27:19 -- nvmf/common.sh@296 -- # x722=() 00:30:05.794 04:27:19 -- nvmf/common.sh@296 -- # local -ga x722 00:30:05.794 04:27:19 -- nvmf/common.sh@297 -- # mlx=() 00:30:05.794 04:27:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:05.794 04:27:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.794 04:27:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:05.794 04:27:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:05.794 04:27:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:05.794 04:27:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:30:05.794 Found 0000:27:00.0 (0x8086 - 0x159b) 00:30:05.794 04:27:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:05.794 04:27:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:30:05.794 Found 0000:27:00.1 (0x8086 - 0x159b) 00:30:05.794 04:27:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:05.794 04:27:19 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:05.794 04:27:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.794 04:27:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:05.794 04:27:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.794 04:27:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:27:00.0: cvl_0_0' 00:30:05.794 Found net devices under 0000:27:00.0: cvl_0_0 00:30:05.794 04:27:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.794 04:27:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:05.794 04:27:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.794 04:27:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:05.794 04:27:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.794 04:27:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:30:05.794 Found net devices under 0000:27:00.1: cvl_0_1 00:30:05.794 04:27:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.794 04:27:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:05.794 04:27:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:05.794 04:27:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:05.794 04:27:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:05.794 04:27:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.794 04:27:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.794 04:27:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.794 04:27:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:05.794 04:27:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.794 04:27:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.794 04:27:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:05.794 04:27:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.794 04:27:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.794 04:27:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:05.794 04:27:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:05.794 04:27:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.794 04:27:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.794 04:27:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.794 04:27:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.794 04:27:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:05.794 04:27:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.794 04:27:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.794 04:27:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.794 04:27:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:05.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:30:05.794 00:30:05.794 --- 10.0.0.2 ping statistics --- 00:30:05.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.794 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:30:05.794 04:27:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:30:05.794 00:30:05.795 --- 10.0.0.1 ping statistics --- 00:30:05.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.795 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:30:05.795 04:27:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.795 04:27:20 -- nvmf/common.sh@410 -- # return 0 00:30:05.795 04:27:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:05.795 04:27:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.795 04:27:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:05.795 04:27:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:05.795 04:27:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.795 04:27:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:05.795 04:27:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:05.795 04:27:20 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:05.795 04:27:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:05.795 04:27:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:05.795 04:27:20 -- common/autotest_common.sh@10 -- # set +x 00:30:05.795 04:27:20 -- nvmf/common.sh@469 -- # nvmfpid=4184613 00:30:05.795 04:27:20 -- nvmf/common.sh@470 -- # waitforlisten 4184613 00:30:05.795 04:27:20 -- common/autotest_common.sh@819 -- # '[' -z 4184613 ']' 00:30:05.795 04:27:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:05.795 04:27:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.795 04:27:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:05.795 04:27:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.795 04:27:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:05.795 04:27:20 -- common/autotest_common.sh@10 -- # set +x 00:30:05.795 [2024-05-14 04:27:20.232478] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:30:05.795 [2024-05-14 04:27:20.232579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.795 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.795 [2024-05-14 04:27:20.351702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.053 [2024-05-14 04:27:20.445560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:06.053 [2024-05-14 04:27:20.445724] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.053 [2024-05-14 04:27:20.445736] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.053 [2024-05-14 04:27:20.445745] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
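The nvmf_tcp_init trace above keeps target and initiator on one machine by moving the target-side port into its own network namespace and wiring 10.0.0.1/10.0.0.2 across the two E810 ports. A condensed sketch of that sequence, using only commands and names that appear in this run (cvl_0_0 = target port, cvl_0_1 = initiator port; this is a summary of the trace, not the full nvmf_tcp_init logic):

    # Namespace-based TCP loopback as set up by nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # host -> target, as in the trace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> host
    modprobe nvme-tcp
    # The target itself is then started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF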
00:30:06.053 [2024-05-14 04:27:20.445823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.053 [2024-05-14 04:27:20.445925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.053 [2024-05-14 04:27:20.445951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.053 [2024-05-14 04:27:20.445959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.621 04:27:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:06.621 04:27:20 -- common/autotest_common.sh@852 -- # return 0 00:30:06.621 04:27:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:06.621 04:27:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:06.621 04:27:20 -- common/autotest_common.sh@10 -- # set +x 00:30:06.621 04:27:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.621 04:27:20 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:06.621 04:27:20 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:13.194 04:27:26 -- host/perf.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:13.194 04:27:26 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:13.194 04:27:26 -- host/perf.sh@30 -- # local_nvme_trid=0000:c9:00.0 00:30:13.194 04:27:26 -- host/perf.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:13.194 04:27:27 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:13.194 04:27:27 -- host/perf.sh@33 -- # '[' -n 0000:c9:00.0 ']' 00:30:13.194 04:27:27 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:13.194 04:27:27 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:13.194 04:27:27 -- host/perf.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:13.194 [2024-05-14 04:27:27.231881] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.195 04:27:27 -- host/perf.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:13.195 04:27:27 -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:13.195 04:27:27 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:13.195 04:27:27 -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:13.195 04:27:27 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:13.195 04:27:27 -- host/perf.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.456 [2024-05-14 04:27:27.826628] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.456 04:27:27 -- host/perf.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:13.456 04:27:28 -- host/perf.sh@52 -- # '[' -n 0000:c9:00.0 ']' 00:30:13.456 04:27:28 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:c9:00.0' 00:30:13.456 04:27:28 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:13.456 04:27:28 -- host/perf.sh@24 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:c9:00.0' 00:30:15.363 Initializing NVMe Controllers 00:30:15.363 Attached to NVMe Controller at 0000:c9:00.0 [8086:0a54] 00:30:15.363 Associating PCIE (0000:c9:00.0) NSID 1 with lcore 0 00:30:15.363 Initialization complete. Launching workers. 00:30:15.363 ======================================================== 00:30:15.363 Latency(us) 00:30:15.363 Device Information : IOPS MiB/s Average min max 00:30:15.363 PCIE (0000:c9:00.0) NSID 1 from core 0: 93028.26 363.39 343.61 39.23 6744.14 00:30:15.363 ======================================================== 00:30:15.363 Total : 93028.26 363.39 343.61 39.23 6744.14 00:30:15.363 00:30:15.363 04:27:29 -- host/perf.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.363 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.296 Initializing NVMe Controllers 00:30:16.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:16.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:16.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:16.296 Initialization complete. Launching workers. 00:30:16.296 ======================================================== 00:30:16.296 Latency(us) 00:30:16.296 Device Information : IOPS MiB/s Average min max 00:30:16.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 73.00 0.29 13903.49 148.35 45806.73 00:30:16.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 80.00 0.31 13086.40 4976.93 55997.60 00:30:16.296 ======================================================== 00:30:16.296 Total : 153.00 0.60 13476.25 148.35 55997.60 00:30:16.296 00:30:16.296 04:27:30 -- host/perf.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.553 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.927 Initializing NVMe Controllers 00:30:17.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:17.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:17.927 Initialization complete. Launching workers. 
00:30:17.927 ======================================================== 00:30:17.927 Latency(us) 00:30:17.927 Device Information : IOPS MiB/s Average min max 00:30:17.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11536.22 45.06 2774.15 375.39 7062.78 00:30:17.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3931.05 15.36 8152.11 6432.16 15921.05 00:30:17.927 ======================================================== 00:30:17.927 Total : 15467.27 60.42 4140.98 375.39 15921.05 00:30:17.927 00:30:17.927 04:27:32 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:30:17.927 04:27:32 -- host/perf.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:17.927 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.452 Initializing NVMe Controllers 00:30:20.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.452 Controller IO queue size 128, less than required. 00:30:20.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:20.453 Controller IO queue size 128, less than required. 00:30:20.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:20.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:20.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:20.453 Initialization complete. Launching workers. 00:30:20.453 ======================================================== 00:30:20.453 Latency(us) 00:30:20.453 Device Information : IOPS MiB/s Average min max 00:30:20.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1459.82 364.96 90399.17 58197.82 160226.16 00:30:20.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.54 145.13 230988.64 88340.76 399586.25 00:30:20.453 ======================================================== 00:30:20.453 Total : 2040.36 510.09 130400.67 58197.82 399586.25 00:30:20.453 00:30:20.453 04:27:34 -- host/perf.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:20.453 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.711 No valid NVMe controllers or AIO or URING devices found 00:30:20.711 Initializing NVMe Controllers 00:30:20.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.711 Controller IO queue size 128, less than required. 00:30:20.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:20.711 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:20.711 Controller IO queue size 128, less than required. 00:30:20.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:20.711 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:20.711 WARNING: Some requested NVMe devices were skipped 00:30:20.711 04:27:35 -- host/perf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:20.711 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.995 Initializing NVMe Controllers 00:30:23.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.995 Controller IO queue size 128, less than required. 00:30:23.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.995 Controller IO queue size 128, less than required. 00:30:23.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:23.995 Initialization complete. Launching workers. 00:30:23.995 00:30:23.995 ==================== 00:30:23.995 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:23.995 TCP transport: 00:30:23.995 polls: 30766 00:30:23.995 idle_polls: 9464 00:30:23.995 sock_completions: 21302 00:30:23.995 nvme_completions: 5601 00:30:23.995 submitted_requests: 8587 00:30:23.995 queued_requests: 1 00:30:23.995 00:30:23.995 ==================== 00:30:23.995 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:23.995 TCP transport: 00:30:23.995 polls: 28692 00:30:23.995 idle_polls: 6761 00:30:23.995 sock_completions: 21931 00:30:23.995 nvme_completions: 5597 00:30:23.995 submitted_requests: 8539 00:30:23.995 queued_requests: 1 00:30:23.995 ======================================================== 00:30:23.995 Latency(us) 00:30:23.995 Device Information : IOPS MiB/s Average min max 00:30:23.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1462.54 365.63 89027.80 41276.67 157644.34 00:30:23.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1461.54 365.38 89509.12 48782.06 199566.64 00:30:23.995 ======================================================== 00:30:23.995 Total : 2924.08 731.02 89268.38 41276.67 199566.64 00:30:23.995 00:30:23.995 04:27:37 -- host/perf.sh@66 -- # sync 00:30:23.995 04:27:37 -- host/perf.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.995 04:27:38 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:23.995 04:27:38 -- host/perf.sh@71 -- # '[' -n 0000:c9:00.0 ']' 00:30:23.995 04:27:38 -- host/perf.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:30.566 04:27:44 -- host/perf.sh@72 -- # ls_guid=131a201a-d3a9-4598-a62a-fd19d6a7057d 00:30:30.566 04:27:44 -- host/perf.sh@73 -- # get_lvs_free_mb 131a201a-d3a9-4598-a62a-fd19d6a7057d 00:30:30.566 04:27:44 -- common/autotest_common.sh@1343 -- # local lvs_uuid=131a201a-d3a9-4598-a62a-fd19d6a7057d 00:30:30.566 04:27:44 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:30.566 04:27:44 -- common/autotest_common.sh@1345 -- # local fc 00:30:30.566 04:27:44 -- common/autotest_common.sh@1346 -- # local cs 00:30:30.566 04:27:44 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:30.566 04:27:44 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:30.566 { 00:30:30.566 "uuid": "131a201a-d3a9-4598-a62a-fd19d6a7057d", 00:30:30.566 "name": "lvs_0", 00:30:30.566 "base_bdev": "Nvme0n1", 00:30:30.566 "total_data_clusters": 476466, 00:30:30.566 "free_clusters": 476466, 00:30:30.566 "block_size": 512, 00:30:30.566 "cluster_size": 4194304 00:30:30.566 } 00:30:30.566 ]' 00:30:30.566 04:27:44 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="131a201a-d3a9-4598-a62a-fd19d6a7057d") .free_clusters' 00:30:30.566 04:27:44 -- common/autotest_common.sh@1348 -- # fc=476466 00:30:30.566 04:27:44 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="131a201a-d3a9-4598-a62a-fd19d6a7057d") .cluster_size' 00:30:30.566 04:27:44 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:30.566 04:27:44 -- common/autotest_common.sh@1352 -- # free_mb=1905864 00:30:30.566 04:27:44 -- common/autotest_common.sh@1353 -- # echo 1905864 00:30:30.566 1905864 00:30:30.566 04:27:44 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:30:30.566 04:27:44 -- host/perf.sh@78 -- # free_mb=20480 00:30:30.566 04:27:44 -- host/perf.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 131a201a-d3a9-4598-a62a-fd19d6a7057d lbd_0 20480 00:30:30.566 04:27:44 -- host/perf.sh@80 -- # lb_guid=90ac1a89-a3a5-4fa2-80ac-18973285b645 00:30:30.566 04:27:44 -- host/perf.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 90ac1a89-a3a5-4fa2-80ac-18973285b645 lvs_n_0 00:30:32.525 04:27:46 -- host/perf.sh@83 -- # ls_nested_guid=3067254b-fc15-46de-9baf-f3faff5c29aa 00:30:32.525 04:27:46 -- host/perf.sh@84 -- # get_lvs_free_mb 3067254b-fc15-46de-9baf-f3faff5c29aa 00:30:32.525 04:27:46 -- common/autotest_common.sh@1343 -- # local lvs_uuid=3067254b-fc15-46de-9baf-f3faff5c29aa 00:30:32.525 04:27:46 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:32.525 04:27:46 -- common/autotest_common.sh@1345 -- # local fc 00:30:32.525 04:27:46 -- common/autotest_common.sh@1346 -- # local cs 00:30:32.525 04:27:46 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:32.525 04:27:46 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:32.525 { 00:30:32.525 "uuid": "131a201a-d3a9-4598-a62a-fd19d6a7057d", 00:30:32.525 "name": "lvs_0", 00:30:32.525 "base_bdev": "Nvme0n1", 00:30:32.525 "total_data_clusters": 476466, 00:30:32.525 "free_clusters": 471346, 00:30:32.525 "block_size": 512, 00:30:32.525 "cluster_size": 4194304 00:30:32.526 }, 00:30:32.526 { 00:30:32.526 "uuid": "3067254b-fc15-46de-9baf-f3faff5c29aa", 00:30:32.526 "name": "lvs_n_0", 00:30:32.526 "base_bdev": "90ac1a89-a3a5-4fa2-80ac-18973285b645", 00:30:32.526 "total_data_clusters": 5114, 00:30:32.526 "free_clusters": 5114, 00:30:32.526 "block_size": 512, 00:30:32.526 "cluster_size": 4194304 00:30:32.526 } 00:30:32.526 ]' 00:30:32.526 04:27:46 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="3067254b-fc15-46de-9baf-f3faff5c29aa") .free_clusters' 00:30:32.526 04:27:47 -- common/autotest_common.sh@1348 -- # fc=5114 00:30:32.526 04:27:47 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="3067254b-fc15-46de-9baf-f3faff5c29aa") .cluster_size' 00:30:32.526 04:27:47 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:32.526 04:27:47 -- common/autotest_common.sh@1352 -- # free_mb=20456 
00:30:32.526 04:27:47 -- common/autotest_common.sh@1353 -- # echo 20456 00:30:32.526 20456 00:30:32.526 04:27:47 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:32.526 04:27:47 -- host/perf.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3067254b-fc15-46de-9baf-f3faff5c29aa lbd_nest_0 20456 00:30:32.785 04:27:47 -- host/perf.sh@88 -- # lb_nested_guid=9b6a9cf4-5601-405e-befa-3b0aa0cc34db 00:30:32.785 04:27:47 -- host/perf.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:33.045 04:27:47 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:33.046 04:27:47 -- host/perf.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9b6a9cf4-5601-405e-befa-3b0aa0cc34db 00:30:33.046 04:27:47 -- host/perf.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.306 04:27:47 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:33.306 04:27:47 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:33.306 04:27:47 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:33.306 04:27:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:33.306 04:27:47 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:33.306 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.514 Initializing NVMe Controllers 00:30:45.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:45.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:45.514 Initialization complete. Launching workers. 00:30:45.514 ======================================================== 00:30:45.514 Latency(us) 00:30:45.514 Device Information : IOPS MiB/s Average min max 00:30:45.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.70 0.02 22429.94 196.68 47540.73 00:30:45.514 ======================================================== 00:30:45.514 Total : 44.70 0.02 22429.94 196.68 47540.73 00:30:45.514 00:30:45.514 04:27:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:45.514 04:27:58 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:45.514 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.491 Initializing NVMe Controllers 00:30:55.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:55.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:55.491 Initialization complete. Launching workers. 
00:30:55.491 ======================================================== 00:30:55.491 Latency(us) 00:30:55.491 Device Information : IOPS MiB/s Average min max 00:30:55.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.70 10.34 12100.73 5013.65 47999.92 00:30:55.491 ======================================================== 00:30:55.491 Total : 82.70 10.34 12100.73 5013.65 47999.92 00:30:55.491 00:30:55.491 04:28:08 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:55.491 04:28:08 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:55.491 04:28:08 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:55.491 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.467 Initializing NVMe Controllers 00:31:05.467 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:05.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:05.467 Initialization complete. Launching workers. 00:31:05.467 ======================================================== 00:31:05.467 Latency(us) 00:31:05.467 Device Information : IOPS MiB/s Average min max 00:31:05.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9108.20 4.45 3520.92 223.03 48003.35 00:31:05.467 ======================================================== 00:31:05.467 Total : 9108.20 4.45 3520.92 223.03 48003.35 00:31:05.467 00:31:05.467 04:28:19 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:05.467 04:28:19 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:05.467 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.521 Initializing NVMe Controllers 00:31:15.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:15.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:15.521 Initialization complete. Launching workers. 00:31:15.521 ======================================================== 00:31:15.521 Latency(us) 00:31:15.521 Device Information : IOPS MiB/s Average min max 00:31:15.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3253.03 406.63 9845.13 548.91 53879.91 00:31:15.521 ======================================================== 00:31:15.521 Total : 3253.03 406.63 9845.13 548.91 53879.91 00:31:15.521 00:31:15.521 04:28:29 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:15.521 04:28:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:15.521 04:28:29 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:15.521 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.729 Initializing NVMe Controllers 00:31:27.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:27.729 Controller IO queue size 128, less than required. 00:31:27.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:27.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:27.729 Initialization complete. Launching workers. 
00:31:27.729 ======================================================== 00:31:27.729 Latency(us) 00:31:27.729 Device Information : IOPS MiB/s Average min max 00:31:27.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15869.41 7.75 8065.93 1538.25 24091.40 00:31:27.729 ======================================================== 00:31:27.729 Total : 15869.41 7.75 8065.93 1538.25 24091.40 00:31:27.729 00:31:27.729 04:28:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:27.729 04:28:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:27.729 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.710 Initializing NVMe Controllers 00:31:37.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:37.710 Controller IO queue size 128, less than required. 00:31:37.710 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:37.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:37.710 Initialization complete. Launching workers. 00:31:37.710 ======================================================== 00:31:37.710 Latency(us) 00:31:37.710 Device Information : IOPS MiB/s Average min max 00:31:37.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1179.55 147.44 108695.23 15464.85 224016.46 00:31:37.710 ======================================================== 00:31:37.710 Total : 1179.55 147.44 108695.23 15464.85 224016.46 00:31:37.710 00:31:37.710 04:28:50 -- host/perf.sh@104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.710 04:28:50 -- host/perf.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9b6a9cf4-5601-405e-befa-3b0aa0cc34db 00:31:37.710 04:28:51 -- host/perf.sh@106 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:37.710 04:28:51 -- host/perf.sh@107 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 90ac1a89-a3a5-4fa2-80ac-18973285b645 00:31:37.710 04:28:51 -- host/perf.sh@108 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:37.710 04:28:51 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:37.710 04:28:51 -- host/perf.sh@114 -- # nvmftestfini 00:31:37.710 04:28:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:37.710 04:28:51 -- nvmf/common.sh@116 -- # sync 00:31:37.710 04:28:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:37.710 04:28:51 -- nvmf/common.sh@119 -- # set +e 00:31:37.710 04:28:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:37.710 04:28:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:37.710 rmmod nvme_tcp 00:31:37.710 rmmod nvme_fabrics 00:31:37.710 rmmod nvme_keyring 00:31:37.710 04:28:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:37.710 04:28:51 -- nvmf/common.sh@123 -- # set -e 00:31:37.710 04:28:51 -- nvmf/common.sh@124 -- # return 0 00:31:37.710 04:28:51 -- nvmf/common.sh@477 -- # '[' -n 4184613 ']' 00:31:37.710 04:28:51 -- nvmf/common.sh@478 -- # killprocess 4184613 00:31:37.710 04:28:51 -- common/autotest_common.sh@926 -- # '[' -z 4184613 ']' 00:31:37.710 04:28:51 -- common/autotest_common.sh@930 -- # kill -0 4184613 00:31:37.710 
04:28:51 -- common/autotest_common.sh@931 -- # uname 00:31:37.710 04:28:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:37.710 04:28:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4184613 00:31:37.710 04:28:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:37.710 04:28:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:37.710 04:28:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4184613' 00:31:37.710 killing process with pid 4184613 00:31:37.710 04:28:51 -- common/autotest_common.sh@945 -- # kill 4184613 00:31:37.710 04:28:51 -- common/autotest_common.sh@950 -- # wait 4184613 00:31:40.998 04:28:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:40.998 04:28:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:40.998 04:28:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:40.998 04:28:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:40.998 04:28:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:40.998 04:28:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.998 04:28:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:40.998 04:28:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.903 04:28:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:42.903 00:31:42.903 real 1m42.395s 00:31:42.903 user 6m14.684s 00:31:42.903 sys 0m11.282s 00:31:42.903 04:28:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:42.903 04:28:57 -- common/autotest_common.sh@10 -- # set +x 00:31:42.903 ************************************ 00:31:42.903 END TEST nvmf_perf 00:31:42.903 ************************************ 00:31:42.903 04:28:57 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:42.903 04:28:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:42.903 04:28:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:42.903 04:28:57 -- common/autotest_common.sh@10 -- # set +x 00:31:42.903 ************************************ 00:31:42.903 START TEST nvmf_fio_host 00:31:42.903 ************************************ 00:31:42.904 04:28:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:42.904 * Looking for test storage... 
00:31:42.904 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:31:42.904 04:28:57 -- host/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:31:42.904 04:28:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.904 04:28:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.904 04:28:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.904 04:28:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.904 04:28:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.904 04:28:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.904 04:28:57 -- paths/export.sh@5 -- # export PATH 00:31:42.904 04:28:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.904 04:28:57 -- host/fio.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.904 04:28:57 -- nvmf/common.sh@7 -- # uname -s 00:31:42.904 04:28:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.904 04:28:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.904 04:28:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:42.904 04:28:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.904 04:28:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.904 04:28:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.904 04:28:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.904 04:28:57 -- nvmf/common.sh@15 
-- # NVMF_TRANSPORT_OPTS= 00:31:42.904 04:28:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.904 04:28:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.904 04:28:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:31:42.904 04:28:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:31:42.904 04:28:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.904 04:28:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.904 04:28:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:42.904 04:28:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:31:42.904 04:28:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.904 04:28:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.904 04:28:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.904 04:28:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.904 04:28:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.904 04:28:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.904 04:28:57 -- paths/export.sh@5 -- # export PATH 00:31:42.904 04:28:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.904 04:28:57 -- nvmf/common.sh@46 -- # : 0 00:31:42.904 04:28:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:42.904 04:28:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:42.904 04:28:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:42.904 04:28:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.904 04:28:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.904 04:28:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:42.904 04:28:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:42.904 04:28:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:42.904 04:28:57 -- host/fio.sh@12 -- # nvmftestinit 00:31:42.904 04:28:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:42.904 04:28:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.904 04:28:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:42.904 04:28:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:42.904 04:28:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:42.904 04:28:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.904 04:28:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:42.904 04:28:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.904 04:28:57 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:31:42.904 04:28:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:42.904 04:28:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:42.904 04:28:57 -- common/autotest_common.sh@10 -- # set +x 00:31:48.176 04:29:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:48.176 04:29:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:48.176 04:29:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:48.176 04:29:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:48.176 04:29:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:48.176 04:29:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:48.176 04:29:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:48.176 04:29:02 -- nvmf/common.sh@294 -- # net_devs=() 00:31:48.176 04:29:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:48.176 04:29:02 -- nvmf/common.sh@295 -- # e810=() 00:31:48.176 04:29:02 -- nvmf/common.sh@295 -- # local -ga e810 00:31:48.176 04:29:02 -- nvmf/common.sh@296 -- # x722=() 00:31:48.176 04:29:02 -- nvmf/common.sh@296 -- # local -ga x722 00:31:48.176 04:29:02 -- nvmf/common.sh@297 -- # mlx=() 00:31:48.176 04:29:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:48.176 04:29:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@307 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.176 04:29:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:48.176 04:29:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:48.176 04:29:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:48.176 04:29:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:31:48.176 Found 0000:27:00.0 (0x8086 - 0x159b) 00:31:48.176 04:29:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:48.176 04:29:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:31:48.176 Found 0000:27:00.1 (0x8086 - 0x159b) 00:31:48.176 04:29:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:48.176 04:29:02 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:48.176 04:29:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.176 04:29:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:48.176 04:29:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.176 04:29:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:31:48.176 Found net devices under 0000:27:00.0: cvl_0_0 00:31:48.176 04:29:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.176 04:29:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:48.176 04:29:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.176 04:29:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:48.176 04:29:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.176 04:29:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:31:48.176 Found net devices under 0000:27:00.1: cvl_0_1 00:31:48.176 04:29:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.176 04:29:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:48.176 04:29:02 -- nvmf/common.sh@402 -- # 
is_hw=yes 00:31:48.176 04:29:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:48.176 04:29:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.176 04:29:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.176 04:29:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.176 04:29:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:48.176 04:29:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.176 04:29:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.176 04:29:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:48.176 04:29:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.176 04:29:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.176 04:29:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:48.176 04:29:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:48.176 04:29:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.176 04:29:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.176 04:29:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.176 04:29:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.176 04:29:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:48.176 04:29:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.176 04:29:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.176 04:29:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.176 04:29:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:48.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:31:48.176 00:31:48.176 --- 10.0.0.2 ping statistics --- 00:31:48.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.176 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:31:48.176 04:29:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:31:48.176 00:31:48.176 --- 10.0.0.1 ping statistics --- 00:31:48.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.176 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:31:48.176 04:29:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.176 04:29:02 -- nvmf/common.sh@410 -- # return 0 00:31:48.176 04:29:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:48.176 04:29:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.176 04:29:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:48.176 04:29:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.176 04:29:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:48.176 04:29:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:48.176 04:29:02 -- host/fio.sh@14 -- # [[ y != y ]] 00:31:48.176 04:29:02 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:31:48.176 04:29:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:48.176 04:29:02 -- common/autotest_common.sh@10 -- # set +x 00:31:48.438 04:29:02 -- host/fio.sh@22 -- # nvmfpid=13372 00:31:48.438 04:29:02 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:48.438 04:29:02 -- host/fio.sh@26 -- # waitforlisten 13372 00:31:48.438 04:29:02 -- common/autotest_common.sh@819 -- # '[' -z 13372 ']' 00:31:48.438 04:29:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.438 04:29:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:48.438 04:29:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.438 04:29:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:48.438 04:29:02 -- common/autotest_common.sh@10 -- # set +x 00:31:48.438 04:29:02 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:48.438 [2024-05-14 04:29:02.825585] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:48.438 [2024-05-14 04:29:02.825660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.438 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.438 [2024-05-14 04:29:02.921285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:48.438 [2024-05-14 04:29:03.017613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:48.438 [2024-05-14 04:29:03.017786] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.438 [2024-05-14 04:29:03.017799] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.438 [2024-05-14 04:29:03.017808] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:48.438 [2024-05-14 04:29:03.017875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.438 [2024-05-14 04:29:03.017982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.438 [2024-05-14 04:29:03.018085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.438 [2024-05-14 04:29:03.018096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:49.004 04:29:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:49.004 04:29:03 -- common/autotest_common.sh@852 -- # return 0 00:31:49.004 04:29:03 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:49.004 04:29:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.004 04:29:03 -- common/autotest_common.sh@10 -- # set +x 00:31:49.004 [2024-05-14 04:29:03.562886] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.004 04:29:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.004 04:29:03 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:31:49.004 04:29:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:49.004 04:29:03 -- common/autotest_common.sh@10 -- # set +x 00:31:49.262 04:29:03 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:49.262 04:29:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.262 04:29:03 -- common/autotest_common.sh@10 -- # set +x 00:31:49.262 Malloc1 00:31:49.262 04:29:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.262 04:29:03 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:49.262 04:29:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.262 04:29:03 -- common/autotest_common.sh@10 -- # set +x 00:31:49.262 04:29:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.262 04:29:03 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:49.262 04:29:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.262 04:29:03 -- common/autotest_common.sh@10 -- # set +x 00:31:49.262 04:29:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.262 04:29:03 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.262 04:29:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.262 04:29:03 -- common/autotest_common.sh@10 -- # set +x 00:31:49.262 [2024-05-14 04:29:03.667337] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.262 04:29:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.262 04:29:03 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:49.262 04:29:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.262 04:29:03 -- common/autotest_common.sh@10 -- # set +x 00:31:49.262 04:29:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.262 04:29:03 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme 00:31:49.262 04:29:03 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:49.262 04:29:03 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:49.262 04:29:03 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:49.262 04:29:03 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:49.262 04:29:03 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:49.262 04:29:03 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:49.262 04:29:03 -- common/autotest_common.sh@1320 -- # shift 00:31:49.262 04:29:03 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:49.262 04:29:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.262 04:29:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:49.262 04:29:03 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:49.262 04:29:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:49.262 04:29:03 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:49.262 04:29:03 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:49.262 04:29:03 -- common/autotest_common.sh@1326 -- # break 00:31:49.262 04:29:03 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:49.262 04:29:03 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:49.825 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:49.825 fio-3.35 00:31:49.825 Starting 1 thread 00:31:49.825 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.421 00:31:52.421 test: (groupid=0, jobs=1): err= 0: pid=13844: Tue May 14 04:29:06 2024 00:31:52.421 read: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(102MiB/2005msec) 00:31:52.421 slat (nsec): min=1562, max=131410, avg=2543.54, stdev=1277.13 00:31:52.421 clat (usec): min=2804, max=8988, avg=5421.55, stdev=400.66 00:31:52.421 lat (usec): min=2827, max=8989, avg=5424.10, stdev=400.60 00:31:52.421 clat percentiles (usec): 00:31:52.421 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5080], 00:31:52.421 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:31:52.421 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 6063], 00:31:52.421 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7963], 00:31:52.421 | 99.99th=[ 8848] 00:31:52.421 bw ( KiB/s): min=50360, max=52800, per=99.99%, avg=51842.00, stdev=1041.57, samples=4 00:31:52.421 iops : min=12590, max=13200, avg=12960.50, stdev=260.39, samples=4 00:31:52.421 write: IOPS=12.9k, BW=50.6MiB/s (53.0MB/s)(101MiB/2005msec); 0 zone resets 00:31:52.421 slat (nsec): min=1620, max=123808, avg=2660.18, stdev=1027.68 00:31:52.421 clat (usec): min=1445, max=8408, avg=4394.82, stdev=342.12 00:31:52.421 lat (usec): min=1458, max=8409, avg=4397.48, stdev=342.12 00:31:52.421 clat percentiles (usec): 00:31:52.421 | 1.00th=[ 3654], 5.00th=[ 3916], 10.00th=[ 4015], 20.00th=[ 4146], 00:31:52.421 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:31:52.421 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4948], 00:31:52.421 | 99.00th=[ 5276], 99.50th=[ 5538], 99.90th=[ 6783], 99.95th=[ 7767], 00:31:52.421 | 
99.99th=[ 8356] 00:31:52.421 bw ( KiB/s): min=50896, max=52416, per=100.00%, avg=51796.00, stdev=694.93, samples=4 00:31:52.421 iops : min=12724, max=13104, avg=12949.00, stdev=173.73, samples=4 00:31:52.421 lat (msec) : 2=0.02%, 4=4.71%, 10=95.28% 00:31:52.421 cpu : usr=84.18%, sys=15.42%, ctx=4, majf=0, minf=1526 00:31:52.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:52.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:52.421 issued rwts: total=25988,25958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:52.421 00:31:52.421 Run status group 0 (all jobs): 00:31:52.421 READ: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=102MiB (106MB), run=2005-2005msec 00:31:52.421 WRITE: bw=50.6MiB/s (53.0MB/s), 50.6MiB/s-50.6MiB/s (53.0MB/s-53.0MB/s), io=101MiB (106MB), run=2005-2005msec 00:31:52.421 ----------------------------------------------------- 00:31:52.421 Suppressions used: 00:31:52.421 count bytes template 00:31:52.421 1 57 /usr/src/fio/parse.c 00:31:52.421 1 8 libtcmalloc_minimal.so 00:31:52.421 ----------------------------------------------------- 00:31:52.421 00:31:52.421 04:29:06 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:52.421 04:29:06 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:52.421 04:29:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:52.421 04:29:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:52.421 04:29:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:52.421 04:29:06 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:52.421 04:29:06 -- common/autotest_common.sh@1320 -- # shift 00:31:52.421 04:29:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:52.421 04:29:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.421 04:29:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:52.421 04:29:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:52.421 04:29:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:52.421 04:29:06 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:52.421 04:29:06 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:52.421 04:29:06 -- common/autotest_common.sh@1326 -- # break 00:31:52.421 04:29:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:52.421 04:29:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:52.679 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:52.679 fio-3.35 00:31:52.679 
Starting 1 thread 00:31:52.936 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.465 00:31:55.465 test: (groupid=0, jobs=1): err= 0: pid=14583: Tue May 14 04:29:09 2024 00:31:55.465 read: IOPS=8640, BW=135MiB/s (142MB/s)(271MiB/2004msec) 00:31:55.465 slat (usec): min=2, max=144, avg= 4.01, stdev= 1.90 00:31:55.465 clat (usec): min=884, max=58864, avg=9085.46, stdev=4733.60 00:31:55.465 lat (usec): min=887, max=58869, avg=9089.46, stdev=4734.12 00:31:55.465 clat percentiles (usec): 00:31:55.465 | 1.00th=[ 3785], 5.00th=[ 4621], 10.00th=[ 5276], 20.00th=[ 6194], 00:31:55.465 | 30.00th=[ 6915], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 9110], 00:31:55.465 | 70.00th=[10159], 80.00th=[11338], 90.00th=[13173], 95.00th=[15008], 00:31:55.465 | 99.00th=[16909], 99.50th=[46924], 99.90th=[57410], 99.95th=[58459], 00:31:55.465 | 99.99th=[58983] 00:31:55.465 bw ( KiB/s): min=44448, max=91904, per=50.45%, avg=69744.00, stdev=20171.62, samples=4 00:31:55.465 iops : min= 2778, max= 5744, avg=4359.00, stdev=1260.73, samples=4 00:31:55.465 write: IOPS=5086, BW=79.5MiB/s (83.3MB/s)(142MiB/1784msec); 0 zone resets 00:31:55.465 slat (usec): min=28, max=197, avg=41.63, stdev=10.61 00:31:55.465 clat (usec): min=1827, max=18468, avg=9978.11, stdev=2570.91 00:31:55.465 lat (usec): min=1856, max=18519, avg=10019.75, stdev=2578.93 00:31:55.465 clat percentiles (usec): 00:31:55.465 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7635], 00:31:55.465 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10421], 00:31:55.465 | 70.00th=[11338], 80.00th=[12256], 90.00th=[13566], 95.00th=[14615], 00:31:55.465 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:31:55.465 | 99.99th=[18482] 00:31:55.465 bw ( KiB/s): min=45184, max=96096, per=89.20%, avg=72592.00, stdev=21431.92, samples=4 00:31:55.465 iops : min= 2824, max= 6006, avg=4537.00, stdev=1339.49, samples=4 00:31:55.465 lat (usec) : 1000=0.01% 00:31:55.465 lat (msec) : 2=0.02%, 4=1.19%, 10=62.45%, 20=35.85%, 50=0.18% 00:31:55.465 lat (msec) : 100=0.30% 00:31:55.465 cpu : usr=86.13%, sys=13.37%, ctx=9, majf=0, minf=2257 00:31:55.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:31:55.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:55.465 issued rwts: total=17316,9074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:55.465 00:31:55.465 Run status group 0 (all jobs): 00:31:55.465 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=271MiB (284MB), run=2004-2004msec 00:31:55.465 WRITE: bw=79.5MiB/s (83.3MB/s), 79.5MiB/s-79.5MiB/s (83.3MB/s-83.3MB/s), io=142MiB (149MB), run=1784-1784msec 00:31:55.465 ----------------------------------------------------- 00:31:55.465 Suppressions used: 00:31:55.465 count bytes template 00:31:55.465 1 57 /usr/src/fio/parse.c 00:31:55.465 476 45696 /usr/src/fio/iolog.c 00:31:55.465 1 8 libtcmalloc_minimal.so 00:31:55.465 ----------------------------------------------------- 00:31:55.465 00:31:55.465 04:29:09 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:55.465 04:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.465 04:29:09 -- common/autotest_common.sh@10 -- # set +x 00:31:55.465 04:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.465 04:29:09 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 
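Note on the fio_plugin traces above: the harness runs fio with the SPDK spdk_nvme ioengine under AddressSanitizer by ldd'ing the plugin, picking out the libasan it was linked against (here /usr/lib64/libasan.so.8), and preloading that runtime together with the plugin before launching fio against the TCP subsystem. A minimal standalone sketch of that preload logic, assuming illustrative paths for the plugin, the fio binary and the job file (the job file is expected to set ioengine=spdk); this is a sketch, not the harness code itself:
# Sketch only - paths below are assumptions, not taken from this run.
plugin=/path/to/spdk/build/fio/spdk_nvme     # SPDK fio ioengine plugin
fio_bin=/usr/src/fio/fio                     # fio binary used by the harness
job=/path/to/example_config.fio              # fio job file with ioengine=spdk
# Resolve the ASAN runtime the plugin links against, as the helper does:
# ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (...)", field 3 is the path.
asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}' | head -n1)
# Preload the sanitizer runtime ahead of the plugin so fio, which is not an
# ASAN build itself, can still load the instrumented ioengine.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" "$fio_bin" "$job" \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096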
00:31:55.465 04:29:09 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:31:55.465 04:29:09 -- host/fio.sh@49 -- # get_nvme_bdfs 00:31:55.465 04:29:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:55.465 04:29:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:55.465 04:29:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:55.465 04:29:09 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:55.465 04:29:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:55.465 04:29:09 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:31:55.465 04:29:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:31:55.465 04:29:09 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:c9:00.0 -i 10.0.0.2 00:31:55.465 04:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.465 04:29:09 -- common/autotest_common.sh@10 -- # set +x 00:31:58.747 Nvme0n1 00:31:58.747 04:29:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:58.747 04:29:12 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:58.747 04:29:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:58.747 04:29:12 -- common/autotest_common.sh@10 -- # set +x 00:32:04.012 04:29:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.012 04:29:18 -- host/fio.sh@51 -- # ls_guid=6c3fe989-2bd1-4586-89f7-a9881b377f69 00:32:04.012 04:29:18 -- host/fio.sh@52 -- # get_lvs_free_mb 6c3fe989-2bd1-4586-89f7-a9881b377f69 00:32:04.012 04:29:18 -- common/autotest_common.sh@1343 -- # local lvs_uuid=6c3fe989-2bd1-4586-89f7-a9881b377f69 00:32:04.012 04:29:18 -- common/autotest_common.sh@1344 -- # local lvs_info 00:32:04.012 04:29:18 -- common/autotest_common.sh@1345 -- # local fc 00:32:04.013 04:29:18 -- common/autotest_common.sh@1346 -- # local cs 00:32:04.013 04:29:18 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:32:04.013 04:29:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.013 04:29:18 -- common/autotest_common.sh@10 -- # set +x 00:32:04.013 04:29:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.013 04:29:18 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:32:04.013 { 00:32:04.013 "uuid": "6c3fe989-2bd1-4586-89f7-a9881b377f69", 00:32:04.013 "name": "lvs_0", 00:32:04.013 "base_bdev": "Nvme0n1", 00:32:04.013 "total_data_clusters": 1862, 00:32:04.013 "free_clusters": 1862, 00:32:04.013 "block_size": 512, 00:32:04.013 "cluster_size": 1073741824 00:32:04.013 } 00:32:04.013 ]' 00:32:04.013 04:29:18 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="6c3fe989-2bd1-4586-89f7-a9881b377f69") .free_clusters' 00:32:04.013 04:29:18 -- common/autotest_common.sh@1348 -- # fc=1862 00:32:04.013 04:29:18 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="6c3fe989-2bd1-4586-89f7-a9881b377f69") .cluster_size' 00:32:04.013 04:29:18 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:32:04.013 04:29:18 -- common/autotest_common.sh@1352 -- # free_mb=1906688 00:32:04.013 04:29:18 -- common/autotest_common.sh@1353 -- # echo 1906688 00:32:04.013 1906688 00:32:04.013 04:29:18 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 1906688 00:32:04.013 04:29:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.013 04:29:18 -- common/autotest_common.sh@10 -- # set +x 00:32:04.013 
d9294187-292d-4622-bf4f-ac529ddf2cf3 00:32:04.013 04:29:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.013 04:29:18 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:04.013 04:29:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.013 04:29:18 -- common/autotest_common.sh@10 -- # set +x 00:32:04.013 04:29:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.013 04:29:18 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:04.013 04:29:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.013 04:29:18 -- common/autotest_common.sh@10 -- # set +x 00:32:04.013 04:29:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.013 04:29:18 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:04.013 04:29:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.013 04:29:18 -- common/autotest_common.sh@10 -- # set +x 00:32:04.013 04:29:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.013 04:29:18 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:04.013 04:29:18 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:04.013 04:29:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:04.013 04:29:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:04.013 04:29:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:04.013 04:29:18 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.013 04:29:18 -- common/autotest_common.sh@1320 -- # shift 00:32:04.013 04:29:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:04.013 04:29:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.013 04:29:18 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.013 04:29:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:04.013 04:29:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:04.013 04:29:18 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:04.013 04:29:18 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:04.013 04:29:18 -- common/autotest_common.sh@1326 -- # break 00:32:04.013 04:29:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:04.013 04:29:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:04.584 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:04.584 fio-3.35 00:32:04.584 Starting 1 thread 00:32:04.584 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.117 00:32:07.117 test: (groupid=0, jobs=1): err= 0: pid=17126: Tue May 14 04:29:21 2024 
00:32:07.117 read: IOPS=7433, BW=29.0MiB/s (30.4MB/s)(58.2MiB/2006msec) 00:32:07.117 slat (nsec): min=1581, max=535209, avg=1930.12, stdev=4529.99 00:32:07.117 clat (usec): min=508, max=480180, avg=9343.19, stdev=31057.78 00:32:07.117 lat (usec): min=510, max=480186, avg=9345.12, stdev=31057.93 00:32:07.117 clat percentiles (msec): 00:32:07.117 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 7], 00:32:07.117 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:32:07.117 | 70.00th=[ 8], 80.00th=[ 8], 90.00th=[ 9], 95.00th=[ 9], 00:32:07.117 | 99.00th=[ 11], 99.50th=[ 12], 99.90th=[ 481], 99.95th=[ 481], 00:32:07.117 | 99.99th=[ 481] 00:32:07.117 bw ( KiB/s): min= 1552, max=39856, per=99.88%, avg=29700.00, stdev=18777.67, samples=4 00:32:07.117 iops : min= 388, max= 9964, avg=7425.00, stdev=4694.42, samples=4 00:32:07.117 write: IOPS=7411, BW=29.0MiB/s (30.4MB/s)(58.1MiB/2006msec); 0 zone resets 00:32:07.117 slat (nsec): min=1665, max=92815, avg=1998.66, stdev=835.70 00:32:07.117 clat (usec): min=355, max=478095, avg=7780.33, stdev=30242.29 00:32:07.117 lat (usec): min=357, max=478099, avg=7782.33, stdev=30242.44 00:32:07.117 clat percentiles (msec): 00:32:07.117 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:32:07.117 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:32:07.117 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:32:07.117 | 99.00th=[ 9], 99.50th=[ 10], 99.90th=[ 477], 99.95th=[ 477], 00:32:07.117 | 99.99th=[ 477] 00:32:07.117 bw ( KiB/s): min= 1688, max=39296, per=99.86%, avg=29606.00, stdev=18614.05, samples=4 00:32:07.117 iops : min= 422, max= 9824, avg=7401.50, stdev=4653.51, samples=4 00:32:07.117 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:07.117 lat (msec) : 2=0.06%, 4=0.31%, 10=98.86%, 20=0.32%, 50=0.01% 00:32:07.117 lat (msec) : 500=0.43% 00:32:07.117 cpu : usr=86.58%, sys=13.07%, ctx=4, majf=0, minf=1521 00:32:07.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:07.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:07.117 issued rwts: total=14912,14868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:07.117 00:32:07.117 Run status group 0 (all jobs): 00:32:07.117 READ: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=58.2MiB (61.1MB), run=2006-2006msec 00:32:07.117 WRITE: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=58.1MiB (60.9MB), run=2006-2006msec 00:32:07.117 ----------------------------------------------------- 00:32:07.117 Suppressions used: 00:32:07.117 count bytes template 00:32:07.117 1 58 /usr/src/fio/parse.c 00:32:07.117 1 8 libtcmalloc_minimal.so 00:32:07.117 ----------------------------------------------------- 00:32:07.117 00:32:07.117 04:29:21 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:07.117 04:29:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:07.117 04:29:21 -- common/autotest_common.sh@10 -- # set +x 00:32:07.117 04:29:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:07.117 04:29:21 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:07.117 04:29:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:07.117 04:29:21 -- common/autotest_common.sh@10 -- # set +x 00:32:08.051 04:29:22 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:32:08.051 04:29:22 -- host/fio.sh@62 -- # ls_nested_guid=f88819ce-36af-4826-b0c5-04c3b1f84709 00:32:08.051 04:29:22 -- host/fio.sh@63 -- # get_lvs_free_mb f88819ce-36af-4826-b0c5-04c3b1f84709 00:32:08.051 04:29:22 -- common/autotest_common.sh@1343 -- # local lvs_uuid=f88819ce-36af-4826-b0c5-04c3b1f84709 00:32:08.051 04:29:22 -- common/autotest_common.sh@1344 -- # local lvs_info 00:32:08.051 04:29:22 -- common/autotest_common.sh@1345 -- # local fc 00:32:08.051 04:29:22 -- common/autotest_common.sh@1346 -- # local cs 00:32:08.051 04:29:22 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:32:08.051 04:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.051 04:29:22 -- common/autotest_common.sh@10 -- # set +x 00:32:08.051 04:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.051 04:29:22 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:32:08.051 { 00:32:08.051 "uuid": "6c3fe989-2bd1-4586-89f7-a9881b377f69", 00:32:08.051 "name": "lvs_0", 00:32:08.051 "base_bdev": "Nvme0n1", 00:32:08.051 "total_data_clusters": 1862, 00:32:08.051 "free_clusters": 0, 00:32:08.051 "block_size": 512, 00:32:08.051 "cluster_size": 1073741824 00:32:08.051 }, 00:32:08.051 { 00:32:08.051 "uuid": "f88819ce-36af-4826-b0c5-04c3b1f84709", 00:32:08.051 "name": "lvs_n_0", 00:32:08.051 "base_bdev": "d9294187-292d-4622-bf4f-ac529ddf2cf3", 00:32:08.051 "total_data_clusters": 476206, 00:32:08.051 "free_clusters": 476206, 00:32:08.051 "block_size": 512, 00:32:08.051 "cluster_size": 4194304 00:32:08.052 } 00:32:08.052 ]' 00:32:08.052 04:29:22 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="f88819ce-36af-4826-b0c5-04c3b1f84709") .free_clusters' 00:32:08.052 04:29:22 -- common/autotest_common.sh@1348 -- # fc=476206 00:32:08.052 04:29:22 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="f88819ce-36af-4826-b0c5-04c3b1f84709") .cluster_size' 00:32:08.052 04:29:22 -- common/autotest_common.sh@1349 -- # cs=4194304 00:32:08.052 04:29:22 -- common/autotest_common.sh@1352 -- # free_mb=1904824 00:32:08.052 04:29:22 -- common/autotest_common.sh@1353 -- # echo 1904824 00:32:08.052 1904824 00:32:08.052 04:29:22 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:32:08.052 04:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.052 04:29:22 -- common/autotest_common.sh@10 -- # set +x 00:32:09.955 d83da237-a915-45c0-95b3-e3429541abfa 00:32:09.955 04:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.955 04:29:24 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:09.955 04:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.955 04:29:24 -- common/autotest_common.sh@10 -- # set +x 00:32:09.955 04:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.956 04:29:24 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:09.956 04:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.956 04:29:24 -- common/autotest_common.sh@10 -- # set +x 00:32:09.956 04:29:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.956 04:29:24 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:09.956 04:29:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.956 04:29:24 -- common/autotest_common.sh@10 -- # set +x 00:32:09.956 04:29:24 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:32:09.956 04:29:24 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:09.956 04:29:24 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:09.956 04:29:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:09.956 04:29:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:09.956 04:29:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:09.956 04:29:24 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:09.956 04:29:24 -- common/autotest_common.sh@1320 -- # shift 00:32:09.956 04:29:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:09.956 04:29:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.956 04:29:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:32:09.956 04:29:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:09.956 04:29:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:09.956 04:29:24 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:09.956 04:29:24 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:09.956 04:29:24 -- common/autotest_common.sh@1326 -- # break 00:32:09.956 04:29:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:09.956 04:29:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:10.214 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:10.214 fio-3.35 00:32:10.214 Starting 1 thread 00:32:10.214 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.741 00:32:12.741 test: (groupid=0, jobs=1): err= 0: pid=18470: Tue May 14 04:29:27 2024 00:32:12.741 read: IOPS=8691, BW=33.9MiB/s (35.6MB/s)(68.1MiB/2007msec) 00:32:12.741 slat (nsec): min=1591, max=93337, avg=1851.44, stdev=963.38 00:32:12.741 clat (usec): min=3779, max=12979, avg=8139.21, stdev=653.24 00:32:12.741 lat (usec): min=3783, max=12981, avg=8141.06, stdev=653.19 00:32:12.741 clat percentiles (usec): 00:32:12.741 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7635], 00:32:12.741 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8291], 00:32:12.741 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9110], 00:32:12.741 | 99.00th=[ 9634], 99.50th=[ 9896], 99.90th=[10945], 99.95th=[11863], 00:32:12.741 | 99.99th=[12911] 00:32:12.741 bw ( KiB/s): min=33216, max=35312, per=99.97%, avg=34754.00, stdev=1025.84, samples=4 00:32:12.741 iops : min= 8304, max= 8828, avg=8688.50, stdev=256.46, samples=4 00:32:12.741 write: IOPS=8685, BW=33.9MiB/s (35.6MB/s)(68.1MiB/2007msec); 0 zone resets 00:32:12.741 slat (nsec): min=1654, max=87660, avg=1953.51, stdev=905.59 00:32:12.741 clat (usec): min=1882, max=11901, avg=6478.95, stdev=584.46 00:32:12.741 lat (usec): 
min=1891, max=11903, avg=6480.90, stdev=584.43 00:32:12.741 clat percentiles (usec): 00:32:12.741 | 1.00th=[ 5145], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 6063], 00:32:12.741 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:32:12.741 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7177], 95.00th=[ 7373], 00:32:12.741 | 99.00th=[ 7832], 99.50th=[ 8094], 99.90th=[10683], 99.95th=[10945], 00:32:12.741 | 99.99th=[11863] 00:32:12.741 bw ( KiB/s): min=34256, max=35200, per=100.00%, avg=34740.00, stdev=438.64, samples=4 00:32:12.741 iops : min= 8564, max= 8800, avg=8685.00, stdev=109.66, samples=4 00:32:12.741 lat (msec) : 2=0.01%, 4=0.11%, 10=99.61%, 20=0.28% 00:32:12.741 cpu : usr=86.69%, sys=12.96%, ctx=3, majf=0, minf=1522 00:32:12.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:12.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:12.741 issued rwts: total=17443,17431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:12.741 00:32:12.741 Run status group 0 (all jobs): 00:32:12.741 READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2007-2007msec 00:32:12.741 WRITE: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2007-2007msec 00:32:12.741 ----------------------------------------------------- 00:32:12.741 Suppressions used: 00:32:12.741 count bytes template 00:32:12.741 1 58 /usr/src/fio/parse.c 00:32:12.741 1 8 libtcmalloc_minimal.so 00:32:12.741 ----------------------------------------------------- 00:32:12.741 00:32:12.999 04:29:27 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:12.999 04:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.999 04:29:27 -- common/autotest_common.sh@10 -- # set +x 00:32:12.999 04:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:12.999 04:29:27 -- host/fio.sh@72 -- # sync 00:32:12.999 04:29:27 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:12.999 04:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.999 04:29:27 -- common/autotest_common.sh@10 -- # set +x 00:32:21.173 04:29:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.173 04:29:35 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:32:21.173 04:29:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.173 04:29:35 -- common/autotest_common.sh@10 -- # set +x 00:32:21.173 04:29:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.173 04:29:35 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:32:21.173 04:29:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.173 04:29:35 -- common/autotest_common.sh@10 -- # set +x 00:32:26.447 04:29:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.447 04:29:40 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:32:26.447 04:29:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.447 04:29:40 -- common/autotest_common.sh@10 -- # set +x 00:32:26.447 04:29:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.447 04:29:40 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:32:26.447 04:29:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.447 04:29:40 -- common/autotest_common.sh@10 -- # 
set +x 00:32:29.734 04:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:29.734 04:29:43 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:32:29.734 04:29:43 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:32:29.734 04:29:43 -- host/fio.sh@84 -- # nvmftestfini 00:32:29.734 04:29:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:29.734 04:29:43 -- nvmf/common.sh@116 -- # sync 00:32:29.734 04:29:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:29.734 04:29:43 -- nvmf/common.sh@119 -- # set +e 00:32:29.734 04:29:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:29.734 04:29:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:29.734 rmmod nvme_tcp 00:32:29.734 rmmod nvme_fabrics 00:32:29.734 rmmod nvme_keyring 00:32:29.734 04:29:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:29.734 04:29:43 -- nvmf/common.sh@123 -- # set -e 00:32:29.734 04:29:43 -- nvmf/common.sh@124 -- # return 0 00:32:29.734 04:29:43 -- nvmf/common.sh@477 -- # '[' -n 13372 ']' 00:32:29.734 04:29:43 -- nvmf/common.sh@478 -- # killprocess 13372 00:32:29.734 04:29:43 -- common/autotest_common.sh@926 -- # '[' -z 13372 ']' 00:32:29.734 04:29:43 -- common/autotest_common.sh@930 -- # kill -0 13372 00:32:29.734 04:29:43 -- common/autotest_common.sh@931 -- # uname 00:32:29.734 04:29:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:29.734 04:29:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 13372 00:32:29.734 04:29:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:29.734 04:29:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:29.734 04:29:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 13372' 00:32:29.734 killing process with pid 13372 00:32:29.734 04:29:44 -- common/autotest_common.sh@945 -- # kill 13372 00:32:29.734 04:29:44 -- common/autotest_common.sh@950 -- # wait 13372 00:32:29.991 04:29:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:29.992 04:29:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:29.992 04:29:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:29.992 04:29:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:29.992 04:29:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:29.992 04:29:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.992 04:29:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.992 04:29:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.529 04:29:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:32.529 00:32:32.529 real 0m49.466s 00:32:32.529 user 3m55.403s 00:32:32.529 sys 0m8.988s 00:32:32.529 04:29:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:32.529 04:29:46 -- common/autotest_common.sh@10 -- # set +x 00:32:32.529 ************************************ 00:32:32.529 END TEST nvmf_fio_host 00:32:32.529 ************************************ 00:32:32.529 04:29:46 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:32.529 04:29:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:32.529 04:29:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:32.529 04:29:46 -- common/autotest_common.sh@10 -- # set +x 00:32:32.529 ************************************ 00:32:32.529 START TEST nvmf_failover 00:32:32.529 ************************************ 00:32:32.529 04:29:46 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:32.529 * Looking for test storage... 00:32:32.529 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:32:32.529 04:29:46 -- host/failover.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.530 04:29:46 -- nvmf/common.sh@7 -- # uname -s 00:32:32.530 04:29:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.530 04:29:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.530 04:29:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.530 04:29:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.530 04:29:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.530 04:29:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.530 04:29:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.530 04:29:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.530 04:29:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.530 04:29:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.530 04:29:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:32:32.530 04:29:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:32:32.530 04:29:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.530 04:29:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.530 04:29:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:32.530 04:29:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:32:32.530 04:29:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.530 04:29:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.530 04:29:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.530 04:29:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.530 04:29:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.530 04:29:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.530 04:29:46 -- paths/export.sh@5 -- # export PATH 00:32:32.530 04:29:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.530 04:29:46 -- nvmf/common.sh@46 -- # : 0 00:32:32.530 04:29:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:32.530 04:29:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:32.530 04:29:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:32.530 04:29:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.530 04:29:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.530 04:29:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:32.530 04:29:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:32.530 04:29:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:32.530 04:29:46 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:32.530 04:29:46 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:32.530 04:29:46 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:32:32.530 04:29:46 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:32.530 04:29:46 -- host/failover.sh@18 -- # nvmftestinit 00:32:32.530 04:29:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:32.530 04:29:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.530 04:29:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:32.530 04:29:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:32.530 04:29:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:32.530 04:29:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.530 04:29:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.530 04:29:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.530 04:29:46 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:32:32.530 04:29:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:32.530 04:29:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:32.530 04:29:46 -- common/autotest_common.sh@10 -- # set +x 00:32:37.800 04:29:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:37.800 04:29:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:37.800 04:29:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:37.800 04:29:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:37.800 04:29:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:37.800 04:29:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:37.800 04:29:52 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:32:37.800 04:29:52 -- nvmf/common.sh@294 -- # net_devs=() 00:32:37.800 04:29:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:37.800 04:29:52 -- nvmf/common.sh@295 -- # e810=() 00:32:37.800 04:29:52 -- nvmf/common.sh@295 -- # local -ga e810 00:32:37.800 04:29:52 -- nvmf/common.sh@296 -- # x722=() 00:32:37.800 04:29:52 -- nvmf/common.sh@296 -- # local -ga x722 00:32:37.800 04:29:52 -- nvmf/common.sh@297 -- # mlx=() 00:32:37.800 04:29:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:37.800 04:29:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.800 04:29:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:37.800 04:29:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:37.800 04:29:52 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:32:37.800 04:29:52 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:32:37.800 04:29:52 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:32:37.800 04:29:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:37.800 04:29:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:37.800 04:29:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:32:37.800 Found 0000:27:00.0 (0x8086 - 0x159b) 00:32:37.800 04:29:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:37.800 04:29:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:37.800 04:29:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.800 04:29:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.800 04:29:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:37.800 04:29:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:37.800 04:29:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:32:37.800 Found 0000:27:00.1 (0x8086 - 0x159b) 00:32:37.800 04:29:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:37.801 04:29:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:37.801 04:29:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.801 04:29:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.801 04:29:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:37.801 04:29:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:37.801 04:29:52 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:32:37.801 04:29:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:37.801 04:29:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.801 04:29:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:37.801 04:29:52 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.801 04:29:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:32:37.801 Found net devices under 0000:27:00.0: cvl_0_0 00:32:37.801 04:29:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.801 04:29:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:37.801 04:29:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.801 04:29:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:37.801 04:29:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.801 04:29:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:32:37.801 Found net devices under 0000:27:00.1: cvl_0_1 00:32:37.801 04:29:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.801 04:29:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:37.801 04:29:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:37.801 04:29:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:37.801 04:29:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:37.801 04:29:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:37.801 04:29:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.801 04:29:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.801 04:29:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.801 04:29:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:37.801 04:29:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.801 04:29:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.801 04:29:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:37.801 04:29:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.801 04:29:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.801 04:29:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:37.801 04:29:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:37.801 04:29:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.801 04:29:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.061 04:29:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.061 04:29:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.061 04:29:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:38.061 04:29:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.061 04:29:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.061 04:29:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.061 04:29:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:38.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:32:38.061 00:32:38.061 --- 10.0.0.2 ping statistics --- 00:32:38.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.061 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:32:38.061 04:29:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:38.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:32:38.061 00:32:38.061 --- 10.0.0.1 ping statistics --- 00:32:38.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.061 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:32:38.061 04:29:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.061 04:29:52 -- nvmf/common.sh@410 -- # return 0 00:32:38.061 04:29:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:38.061 04:29:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.061 04:29:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:38.061 04:29:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:38.061 04:29:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.061 04:29:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:38.061 04:29:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:38.061 04:29:52 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:38.061 04:29:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:38.061 04:29:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:38.061 04:29:52 -- common/autotest_common.sh@10 -- # set +x 00:32:38.061 04:29:52 -- nvmf/common.sh@469 -- # nvmfpid=26265 00:32:38.061 04:29:52 -- nvmf/common.sh@470 -- # waitforlisten 26265 00:32:38.061 04:29:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:38.061 04:29:52 -- common/autotest_common.sh@819 -- # '[' -z 26265 ']' 00:32:38.061 04:29:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.061 04:29:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:38.061 04:29:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.061 04:29:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:38.061 04:29:52 -- common/autotest_common.sh@10 -- # set +x 00:32:38.322 [2024-05-14 04:29:52.670749] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:38.322 [2024-05-14 04:29:52.670878] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.322 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.322 [2024-05-14 04:29:52.807322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:38.322 [2024-05-14 04:29:52.907946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:38.322 [2024-05-14 04:29:52.908152] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.322 [2024-05-14 04:29:52.908168] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.322 [2024-05-14 04:29:52.908181] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
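For reference on the core-mask arithmetic used just above: nvmfappstart -m 0xE starts the target with CPU mask 0xE = 0b1110, so bit 0 is clear and bits 1-3 are set, which matches the three reactors reported on cores 1, 2 and 3 in the lines that follow. A tiny illustrative bash check of that decoding (not part of the test scripts):
mask=0xE
for cpu in 0 1 2 3; do
  # print the cores whose bit is set in the mask
  (( (mask >> cpu) & 1 )) && echo "reactor expected on core $cpu"
done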
00:32:38.583 [2024-05-14 04:29:52.910027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:38.583 [2024-05-14 04:29:52.910073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.583 [2024-05-14 04:29:52.910080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:38.842 04:29:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:38.842 04:29:53 -- common/autotest_common.sh@852 -- # return 0 00:32:38.842 04:29:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:38.842 04:29:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:38.842 04:29:53 -- common/autotest_common.sh@10 -- # set +x 00:32:38.842 04:29:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.842 04:29:53 -- host/failover.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:39.106 [2024-05-14 04:29:53.561243] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.106 04:29:53 -- host/failover.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:39.366 Malloc0 00:32:39.366 04:29:53 -- host/failover.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:39.366 04:29:53 -- host/failover.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:39.626 04:29:54 -- host/failover.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.887 [2024-05-14 04:29:54.217534] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.887 04:29:54 -- host/failover.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:39.887 [2024-05-14 04:29:54.369561] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:39.887 04:29:54 -- host/failover.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:40.149 [2024-05-14 04:29:54.521785] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:40.149 04:29:54 -- host/failover.sh@31 -- # bdevperf_pid=26598 00:32:40.149 04:29:54 -- host/failover.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:40.149 04:29:54 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:40.149 04:29:54 -- host/failover.sh@34 -- # waitforlisten 26598 /var/tmp/bdevperf.sock 00:32:40.149 04:29:54 -- common/autotest_common.sh@819 -- # '[' -z 26598 ']' 00:32:40.149 04:29:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:40.149 04:29:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:40.149 04:29:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:32:40.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:40.149 04:29:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:40.149 04:29:54 -- common/autotest_common.sh@10 -- # set +x 00:32:41.142 04:29:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:41.142 04:29:55 -- common/autotest_common.sh@852 -- # return 0 00:32:41.142 04:29:55 -- host/failover.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:41.425 NVMe0n1 00:32:41.425 04:29:55 -- host/failover.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:41.685 00:32:41.685 04:29:56 -- host/failover.sh@39 -- # run_test_pid=26911 00:32:41.685 04:29:56 -- host/failover.sh@41 -- # sleep 1 00:32:41.685 04:29:56 -- host/failover.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:42.619 04:29:57 -- host/failover.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:42.878 [2024-05-14 04:29:57.283690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is 
same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.283836 - 04:29:57.284132, same message repeated] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with
the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.878 [2024-05-14 04:29:57.284252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.879 [2024-05-14 04:29:57.284259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.879 [2024-05-14 04:29:57.284266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.879 [2024-05-14 04:29:57.284272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.879 [2024-05-14 04:29:57.284279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:32:42.879 04:29:57 -- host/failover.sh@45 -- # sleep 3 00:32:46.163 04:30:00 -- host/failover.sh@47 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:46.163 00:32:46.163 04:30:00 -- host/failover.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:46.421 [2024-05-14 04:30:00.791131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.421 [2024-05-14 04:30:00.791488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 [2024-05-14 04:30:00.791581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:32:46.422 04:30:00 -- host/failover.sh@50 -- # sleep 3 00:32:49.707 04:30:03 -- host/failover.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.707 [2024-05-14 04:30:03.933294] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.707 04:30:03 -- host/failover.sh@55 -- # sleep 1 00:32:50.644 04:30:04 -- host/failover.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:50.644 [2024-05-14 04:30:05.105455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the 
state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the 
state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 [2024-05-14 04:30:05.105806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:32:50.644 04:30:05 -- host/failover.sh@59 -- # wait 26911 00:32:57.212 0 00:32:57.212 04:30:11 -- host/failover.sh@61 -- # killprocess 26598 00:32:57.212 04:30:11 -- common/autotest_common.sh@926 -- # '[' -z 26598 ']' 00:32:57.212 04:30:11 -- common/autotest_common.sh@930 -- # kill -0 26598 00:32:57.212 04:30:11 -- common/autotest_common.sh@931 -- # uname 00:32:57.212 04:30:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:57.212 04:30:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 26598 00:32:57.212 04:30:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:57.212 04:30:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:57.212 04:30:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 26598' 00:32:57.212 killing process with pid 26598 00:32:57.212 04:30:11 -- common/autotest_common.sh@945 -- # kill 26598 00:32:57.212 04:30:11 -- common/autotest_common.sh@950 -- # wait 26598 00:32:57.212 04:30:11 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:57.212 [2024-05-14 04:29:54.624973] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:32:57.212 [2024-05-14 04:29:54.625131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid26598 ] 00:32:57.212 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.212 [2024-05-14 04:29:54.755604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.212 [2024-05-14 04:29:54.845855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.212 Running I/O for 15 seconds... 00:32:57.212 [2024-05-14 04:29:57.284691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.212 [2024-05-14 04:29:57.284738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.212 [2024-05-14 04:29:57.284764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.212 [2024-05-14 04:29:57.284774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.212 [2024-05-14 04:29:57.284785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.212 [2024-05-14 04:29:57.284793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.212 [2024-05-14 04:29:57.284802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.212 [2024-05-14 04:29:57.284810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.212 [2024-05-14 04:29:57.284820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.212 [2024-05-14 04:29:57.284828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.212 [2024-05-14 04:29:57.284838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.212 [2024-05-14 04:29:57.284846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.212 [2024-05-14 04:29:57.284855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.212 [2024-05-14 04:29:57.284863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.212 [2024-05-14 04:29:57.284872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.212 [2024-05-14 04:29:57.284880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.212 [2024-05-14 04:29:57.284890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20392 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.284897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.284907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.284915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.284925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.284932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.284946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.284954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.284964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.284971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.284981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.284989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.284999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 
[2024-05-14 04:29:57.285075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285269] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.213 [2024-05-14 04:29:57.285598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.213 [2024-05-14 04:29:57.285608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.213 [2024-05-14 04:29:57.285617] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.285702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.285738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.285755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.285773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.285823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.285893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:57.214 [2024-05-14 04:29:57.285974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.285991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.285999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.286016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.286033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.286051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.286069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.286087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.286104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.286121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.286138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286147] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.286155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.286172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.286195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.286213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.286230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.286248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.286265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.286286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.214 [2024-05-14 04:29:57.286303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.214 [2024-05-14 04:29:57.286313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.214 [2024-05-14 04:29:57.286321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286330] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21136 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:57.215 [2024-05-14 04:29:57.286691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.215 [2024-05-14 04:29:57.286867] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.286984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.286994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.215 [2024-05-14 04:29:57.287002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.287011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000042c0 is same with the state(5) to be set 00:32:57.215 [2024-05-14 04:29:57.287023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.215 [2024-05-14 04:29:57.287032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.215 [2024-05-14 04:29:57.287042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21328 len:8 PRP1 0x0 PRP2 0x0 00:32:57.215 [2024-05-14 04:29:57.287051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.215 [2024-05-14 04:29:57.287174] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6130000042c0 was disconnected and freed. reset controller. 
00:32:57.216 [2024-05-14 04:29:57.287202] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:57.216 [2024-05-14 04:29:57.287234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.216 [2024-05-14 04:29:57.287244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:29:57.287255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.216 [2024-05-14 04:29:57.287262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:29:57.287271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.216 [2024-05-14 04:29:57.287279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:29:57.287287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.216 [2024-05-14 04:29:57.287295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:29:57.287303] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.216 [2024-05-14 04:29:57.289008] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.216 [2024-05-14 04:29:57.289036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:32:57.216 [2024-05-14 04:29:57.442722] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
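[editor's note] The block above records one complete failover cycle: queued I/O on the TCP qpair is aborted with ABORTED - SQ DELETION status, bdev_nvme_failover_trid switches the active path from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes successfully. As a minimal sketch of how such a failover can be provoked on the target side (the rpc.py path, bdev name, NQN and ports below are illustrative assumptions, not values taken from this log): expose one subsystem on several TCP listeners, then remove the listener the host is currently connected to.

  rpc_py=./scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # target side: one subsystem, one namespace, three TCP listeners
  $rpc_py nvmf_create_transport -t tcp
  $rpc_py bdev_malloc_create -b Malloc0 64 512
  $rpc_py nvmf_create_subsystem "$nqn" -a
  $rpc_py nvmf_subsystem_add_ns "$nqn" Malloc0
  for port in 4420 4421 4422; do
      $rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
  done

  # dropping the active listener aborts in-flight I/O (SQ DELETION) and
  # triggers a failover/reset on the host, as seen in the log above
  $rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420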
00:32:57.216 [2024-05-14 04:30:00.791684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791925] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.791988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.791996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.216 [2024-05-14 04:30:00.792291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:39 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.216 [2024-05-14 04:30:00.792319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.216 [2024-05-14 04:30:00.792326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72552 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 
[2024-05-14 04:30:00.792660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.217 [2024-05-14 04:30:00.792943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.217 [2024-05-14 04:30:00.792961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.217 [2024-05-14 04:30:00.792971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.792979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.792988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.792996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 
[2024-05-14 04:30:00.793388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.218 [2024-05-14 04:30:00.793659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.218 [2024-05-14 04:30:00.793685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.218 [2024-05-14 04:30:00.793692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.793710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.219 [2024-05-14 04:30:00.793726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:123 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.219 [2024-05-14 04:30:00.793743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.219 [2024-05-14 04:30:00.793760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.219 [2024-05-14 04:30:00.793778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.793795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.793819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.219 [2024-05-14 04:30:00.793836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.219 [2024-05-14 04:30:00.793853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.793870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.219 [2024-05-14 04:30:00.793887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.793905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72336 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.793922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.793940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.793957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.793975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.793985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.793993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.794004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:00.794012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.794021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000004640 is same with the state(5) to be set 00:32:57.219 [2024-05-14 04:30:00.794033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.219 [2024-05-14 04:30:00.794043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.219 [2024-05-14 04:30:00.794052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72440 len:8 PRP1 0x0 PRP2 0x0 00:32:57.219 [2024-05-14 04:30:00.794062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.794197] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000004640 was disconnected and freed. reset controller. 
00:32:57.219 [2024-05-14 04:30:00.794210] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:57.219 [2024-05-14 04:30:00.794241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.219 [2024-05-14 04:30:00.794255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.794268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.219 [2024-05-14 04:30:00.794279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.794288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.219 [2024-05-14 04:30:00.794296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.794304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.219 [2024-05-14 04:30:00.794313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:00.794321] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.219 [2024-05-14 04:30:00.795988] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.219 [2024-05-14 04:30:00.796019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:32:57.219 [2024-05-14 04:30:00.818410] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
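[editor's note] The second cycle above repeats the same sequence, now failing over from 10.0.0.2:4421 to 10.0.0.2:4422 before the reset completes. On the host side, the alternate paths that bdev_nvme_failover_trid rotates through are typically registered up front. A hedged sketch follows; the bdev name and NQN are assumptions, and the -x/--multipath flag spelling varies across SPDK versions, so verify it against the tree under test.

  # host side (assumed names; flag spelling may differ by SPDK version)
  $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n "$nqn"
  # registering the remaining listeners as alternate paths lets the reset
  # fail over to 4421 and then 4422, matching the sequence logged here
  $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n "$nqn" -x failover
  $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
      -f ipv4 -n "$nqn" -x failover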
00:32:57.219 [2024-05-14 04:30:05.105933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.105984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106237] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.219 [2024-05-14 04:30:05.106303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.219 [2024-05-14 04:30:05.106314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:101 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.220 [2024-05-14 04:30:05.106609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.220 [2024-05-14 04:30:05.106646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.220 [2024-05-14 04:30:05.106664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.220 [2024-05-14 04:30:05.106681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45032 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.220 [2024-05-14 04:30:05.106860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.220 [2024-05-14 04:30:05.106935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.220 [2024-05-14 04:30:05.106954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:57.220 [2024-05-14 04:30:05.106971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.106981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.106990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.107000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.107008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.107019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.107028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.107037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.220 [2024-05-14 04:30:05.107046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.220 [2024-05-14 04:30:05.107060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.220 [2024-05-14 04:30:05.107068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107168] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.221 [2024-05-14 04:30:05.107312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.221 [2024-05-14 04:30:05.107351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.221 [2024-05-14 04:30:05.107370] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.221 [2024-05-14 04:30:05.107422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.221 [2024-05-14 04:30:05.107457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.221 [2024-05-14 04:30:05.107581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.221 [2024-05-14 04:30:05.107599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.221 [2024-05-14 04:30:05.107665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.221 [2024-05-14 04:30:05.107683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.221 [2024-05-14 04:30:05.107718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.221 [2024-05-14 04:30:05.107727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.221 [2024-05-14 04:30:05.107736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:57.222 [2024-05-14 04:30:05.107746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.107753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.107776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.107793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.107812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.222 [2024-05-14 04:30:05.107830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.222 [2024-05-14 04:30:05.107849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.107868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.107887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.107904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.222 [2024-05-14 04:30:05.107922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107931] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.222 [2024-05-14 04:30:05.107939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.222 [2024-05-14 04:30:05.107956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.107973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.107983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.107991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.222 [2024-05-14 04:30:05.108009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.222 [2024-05-14 04:30:05.108061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.222 [2024-05-14 04:30:05.108079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.222 [2024-05-14 04:30:05.108096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108105] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.222 [2024-05-14 04:30:05.108113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45400 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.222 [2024-05-14 04:30:05.108398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000004d40 is same with the state(5) to be set 00:32:57.222 [2024-05-14 04:30:05.108420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:57.222 [2024-05-14 04:30:05.108428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:57.222 [2024-05-14 04:30:05.108438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45536 len:8 PRP1 0x0 PRP2 0x0 00:32:57.222 [2024-05-14 04:30:05.108448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108575] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000004d40 was disconnected and freed. reset controller. 
00:32:57.222 [2024-05-14 04:30:05.108597] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:57.222 [2024-05-14 04:30:05.108629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.222 [2024-05-14 04:30:05.108640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.222 [2024-05-14 04:30:05.108659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.222 [2024-05-14 04:30:05.108668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.223 [2024-05-14 04:30:05.108676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.223 [2024-05-14 04:30:05.108685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:57.223 [2024-05-14 04:30:05.108692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.223 [2024-05-14 04:30:05.108700] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:57.223 [2024-05-14 04:30:05.110421] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:57.223 [2024-05-14 04:30:05.110453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:32:57.223 [2024-05-14 04:30:05.182961] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:57.223 00:32:57.223 Latency(us) 00:32:57.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.223 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:57.223 Verification LBA range: start 0x0 length 0x4000 00:32:57.223 NVMe0n1 : 15.01 17540.55 68.52 1303.84 0.00 6780.47 556.19 12762.27 00:32:57.223 =================================================================================================================== 00:32:57.223 Total : 17540.55 68.52 1303.84 0.00 6780.47 556.19 12762.27 00:32:57.223 Received shutdown signal, test time was about 15.000000 seconds 00:32:57.223 00:32:57.223 Latency(us) 00:32:57.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.223 =================================================================================================================== 00:32:57.223 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:57.223 04:30:11 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:57.223 04:30:11 -- host/failover.sh@65 -- # count=3 00:32:57.223 04:30:11 -- host/failover.sh@67 -- # (( count != 3 )) 00:32:57.223 04:30:11 -- host/failover.sh@73 -- # bdevperf_pid=30471 00:32:57.223 04:30:11 -- host/failover.sh@75 -- # waitforlisten 30471 /var/tmp/bdevperf.sock 00:32:57.223 04:30:11 -- common/autotest_common.sh@819 -- # '[' -z 30471 ']' 00:32:57.223 04:30:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:57.223 04:30:11 -- host/failover.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:57.223 04:30:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:57.223 04:30:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:57.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
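The shell trace at this point checks that the first bdevperf run observed exactly three successful failover resets before it launches a second bdevperf instance on /var/tmp/bdevperf.sock. A minimal sketch of that check, assuming the bdevperf output has been captured to a placeholder file named bdevperf.log (the real test greps its own captured output, whose name is not shown here):

  count=$(grep -c 'Resetting controller successful' bdevperf.log)   # placeholder file name
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, saw $count" >&2
      exit 1
  fi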
00:32:57.223 04:30:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:57.223 04:30:11 -- common/autotest_common.sh@10 -- # set +x 00:32:58.155 04:30:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:58.155 04:30:12 -- common/autotest_common.sh@852 -- # return 0 00:32:58.155 04:30:12 -- host/failover.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:58.155 [2024-05-14 04:30:12.585120] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:58.155 04:30:12 -- host/failover.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:58.156 [2024-05-14 04:30:12.741221] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:58.413 04:30:12 -- host/failover.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:58.673 NVMe0n1 00:32:58.673 04:30:13 -- host/failover.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:58.673 00:32:58.673 04:30:13 -- host/failover.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:59.244 00:32:59.244 04:30:13 -- host/failover.sh@82 -- # grep -q NVMe0 00:32:59.244 04:30:13 -- host/failover.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:59.244 04:30:13 -- host/failover.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:59.505 04:30:13 -- host/failover.sh@87 -- # sleep 3 00:33:02.790 04:30:16 -- host/failover.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:02.790 04:30:16 -- host/failover.sh@88 -- # grep -q NVMe0 00:33:02.790 04:30:17 -- host/failover.sh@90 -- # run_test_pid=31397 00:33:02.790 04:30:17 -- host/failover.sh@92 -- # wait 31397 00:33:02.790 04:30:17 -- host/failover.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:03.733 0 00:33:03.733 04:30:18 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:03.733 [2024-05-14 04:30:11.761442] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
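The rpc.py calls in the trace above drive the multipath failover scenario: two extra listeners are added to the subsystem, the same namespace is attached through three TCP ports under a single controller name, and the active path is then detached so the bdev_nvme layer has to fail over. A condensed sketch of that sequence, reconstructed from the trace rather than copied from the test script (socket path, address, ports, and NQN are taken from the log; only RPC methods that actually appear there are used):

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # Expose the subsystem on two additional TCP ports so alternate paths exist.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

  # Attach the target through all three ports under one controller name (multipath).
  for port in 4420 4421 4422; do
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done

  # Confirm the controller is present, then drop the active path to force a failover.
  $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn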
00:33:03.733 [2024-05-14 04:30:11.761600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30471 ] 00:33:03.733 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.733 [2024-05-14 04:30:11.891768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.733 [2024-05-14 04:30:11.981954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.733 [2024-05-14 04:30:13.838987] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:03.733 [2024-05-14 04:30:13.839053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.733 [2024-05-14 04:30:13.839068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.733 [2024-05-14 04:30:13.839080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.733 [2024-05-14 04:30:13.839089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.733 [2024-05-14 04:30:13.839098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.733 [2024-05-14 04:30:13.839106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.733 [2024-05-14 04:30:13.839114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:03.733 [2024-05-14 04:30:13.839122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:03.733 [2024-05-14 04:30:13.839131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:03.733 [2024-05-14 04:30:13.839181] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:03.733 [2024-05-14 04:30:13.839206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:33:03.733 [2024-05-14 04:30:13.848099] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:03.733 Running I/O for 1 seconds... 
00:33:03.733 00:33:03.733 Latency(us) 00:33:03.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.733 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:03.733 Verification LBA range: start 0x0 length 0x4000 00:33:03.733 NVMe0n1 : 1.01 17742.86 69.31 0.00 0.00 7185.56 1056.34 10761.70 00:33:03.733 =================================================================================================================== 00:33:03.733 Total : 17742.86 69.31 0.00 0.00 7185.56 1056.34 10761.70 00:33:03.733 04:30:18 -- host/failover.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:03.733 04:30:18 -- host/failover.sh@95 -- # grep -q NVMe0 00:33:03.733 04:30:18 -- host/failover.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:04.054 04:30:18 -- host/failover.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:04.054 04:30:18 -- host/failover.sh@99 -- # grep -q NVMe0 00:33:04.054 04:30:18 -- host/failover.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:04.312 04:30:18 -- host/failover.sh@101 -- # sleep 3 00:33:07.598 04:30:21 -- host/failover.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:07.598 04:30:21 -- host/failover.sh@103 -- # grep -q NVMe0 00:33:07.598 04:30:21 -- host/failover.sh@108 -- # killprocess 30471 00:33:07.598 04:30:21 -- common/autotest_common.sh@926 -- # '[' -z 30471 ']' 00:33:07.598 04:30:21 -- common/autotest_common.sh@930 -- # kill -0 30471 00:33:07.598 04:30:21 -- common/autotest_common.sh@931 -- # uname 00:33:07.598 04:30:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:07.598 04:30:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 30471 00:33:07.598 04:30:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:07.598 04:30:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:07.598 04:30:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 30471' 00:33:07.598 killing process with pid 30471 00:33:07.598 04:30:21 -- common/autotest_common.sh@945 -- # kill 30471 00:33:07.598 04:30:21 -- common/autotest_common.sh@950 -- # wait 30471 00:33:07.856 04:30:22 -- host/failover.sh@110 -- # sync 00:33:07.856 04:30:22 -- host/failover.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:07.856 04:30:22 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:07.856 04:30:22 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:07.856 04:30:22 -- host/failover.sh@116 -- # nvmftestfini 00:33:07.856 04:30:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:07.856 04:30:22 -- nvmf/common.sh@116 -- # sync 00:33:07.856 04:30:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:07.856 04:30:22 -- nvmf/common.sh@119 -- # set +e 00:33:07.856 04:30:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:07.856 04:30:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:07.856 rmmod 
nvme_tcp 00:33:08.116 rmmod nvme_fabrics 00:33:08.116 rmmod nvme_keyring 00:33:08.116 04:30:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:08.116 04:30:22 -- nvmf/common.sh@123 -- # set -e 00:33:08.116 04:30:22 -- nvmf/common.sh@124 -- # return 0 00:33:08.116 04:30:22 -- nvmf/common.sh@477 -- # '[' -n 26265 ']' 00:33:08.116 04:30:22 -- nvmf/common.sh@478 -- # killprocess 26265 00:33:08.116 04:30:22 -- common/autotest_common.sh@926 -- # '[' -z 26265 ']' 00:33:08.116 04:30:22 -- common/autotest_common.sh@930 -- # kill -0 26265 00:33:08.116 04:30:22 -- common/autotest_common.sh@931 -- # uname 00:33:08.116 04:30:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:08.116 04:30:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 26265 00:33:08.116 04:30:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:08.116 04:30:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:08.116 04:30:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 26265' 00:33:08.116 killing process with pid 26265 00:33:08.116 04:30:22 -- common/autotest_common.sh@945 -- # kill 26265 00:33:08.116 04:30:22 -- common/autotest_common.sh@950 -- # wait 26265 00:33:08.685 04:30:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:08.685 04:30:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:08.685 04:30:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:08.685 04:30:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:08.685 04:30:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:08.685 04:30:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.685 04:30:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:08.685 04:30:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.584 04:30:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:10.584 00:33:10.584 real 0m38.455s 00:33:10.584 user 2m1.354s 00:33:10.584 sys 0m7.386s 00:33:10.584 04:30:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:10.585 04:30:25 -- common/autotest_common.sh@10 -- # set +x 00:33:10.585 ************************************ 00:33:10.585 END TEST nvmf_failover 00:33:10.585 ************************************ 00:33:10.585 04:30:25 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:10.585 04:30:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:10.585 04:30:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:10.585 04:30:25 -- common/autotest_common.sh@10 -- # set +x 00:33:10.585 ************************************ 00:33:10.585 START TEST nvmf_discovery 00:33:10.585 ************************************ 00:33:10.585 04:30:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:10.843 * Looking for test storage... 
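Teardown in the trace above deletes the NVMe-oF subsystem and unloads the kernel NVMe/TCP initiator modules before the next test starts. A condensed sketch using only commands that appear in the trace; the rmmod lines printed above are the verbose output of the first modprobe call:

  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp       # verbose removal also drops nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics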
00:33:10.843 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:33:10.843 04:30:25 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.843 04:30:25 -- nvmf/common.sh@7 -- # uname -s 00:33:10.843 04:30:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.843 04:30:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.843 04:30:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.843 04:30:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.843 04:30:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.843 04:30:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.843 04:30:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.843 04:30:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.843 04:30:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.843 04:30:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.843 04:30:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:33:10.843 04:30:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:33:10.843 04:30:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.843 04:30:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.843 04:30:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:10.843 04:30:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:10.843 04:30:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.843 04:30:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.843 04:30:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.843 04:30:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.843 04:30:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.844 04:30:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.844 04:30:25 -- paths/export.sh@5 -- # export PATH 00:33:10.844 04:30:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.844 04:30:25 -- nvmf/common.sh@46 -- # : 0 00:33:10.844 04:30:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:10.844 04:30:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:10.844 04:30:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:10.844 04:30:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.844 04:30:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.844 04:30:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:10.844 04:30:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:10.844 04:30:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:10.844 04:30:25 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:10.844 04:30:25 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:10.844 04:30:25 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:10.844 04:30:25 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:10.844 04:30:25 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:10.844 04:30:25 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:10.844 04:30:25 -- host/discovery.sh@25 -- # nvmftestinit 00:33:10.844 04:30:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:10.844 04:30:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.844 04:30:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:10.844 04:30:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:10.844 04:30:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:10.844 04:30:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.844 04:30:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:10.844 04:30:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.844 04:30:25 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:33:10.844 04:30:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:10.844 04:30:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:10.844 04:30:25 -- common/autotest_common.sh@10 -- # set +x 00:33:16.117 04:30:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:16.117 04:30:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:16.117 04:30:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:16.117 04:30:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:16.117 04:30:30 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:16.117 04:30:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:16.117 04:30:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:16.117 04:30:30 -- nvmf/common.sh@294 -- # net_devs=() 00:33:16.117 04:30:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:16.117 04:30:30 -- nvmf/common.sh@295 -- # e810=() 00:33:16.117 04:30:30 -- nvmf/common.sh@295 -- # local -ga e810 00:33:16.117 04:30:30 -- nvmf/common.sh@296 -- # x722=() 00:33:16.117 04:30:30 -- nvmf/common.sh@296 -- # local -ga x722 00:33:16.117 04:30:30 -- nvmf/common.sh@297 -- # mlx=() 00:33:16.117 04:30:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:16.117 04:30:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.117 04:30:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:16.117 04:30:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:16.117 04:30:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:16.117 04:30:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:33:16.117 Found 0000:27:00.0 (0x8086 - 0x159b) 00:33:16.117 04:30:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:16.117 04:30:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:33:16.117 Found 0000:27:00.1 (0x8086 - 0x159b) 00:33:16.117 04:30:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:16.117 04:30:30 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:16.117 04:30:30 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.117 04:30:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:16.117 04:30:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.117 04:30:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:33:16.117 Found net devices under 0000:27:00.0: cvl_0_0 00:33:16.117 04:30:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.117 04:30:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:16.117 04:30:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.117 04:30:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:16.117 04:30:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.117 04:30:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:33:16.117 Found net devices under 0000:27:00.1: cvl_0_1 00:33:16.117 04:30:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.117 04:30:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:16.117 04:30:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:16.117 04:30:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:16.117 04:30:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:16.117 04:30:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.117 04:30:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.117 04:30:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.117 04:30:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:16.117 04:30:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.118 04:30:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.118 04:30:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:16.118 04:30:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.118 04:30:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.118 04:30:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:16.118 04:30:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:16.118 04:30:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.118 04:30:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.118 04:30:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.118 04:30:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.118 04:30:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:16.118 04:30:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.377 04:30:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.377 04:30:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.377 04:30:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:16.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:33:16.377 00:33:16.377 --- 10.0.0.2 ping statistics --- 00:33:16.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.377 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:33:16.377 04:30:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:16.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:33:16.377 00:33:16.377 --- 10.0.0.1 ping statistics --- 00:33:16.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.377 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:33:16.377 04:30:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.377 04:30:30 -- nvmf/common.sh@410 -- # return 0 00:33:16.377 04:30:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:16.377 04:30:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.377 04:30:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:16.377 04:30:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:16.378 04:30:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.378 04:30:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:16.378 04:30:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:16.378 04:30:30 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:16.378 04:30:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:16.378 04:30:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:16.378 04:30:30 -- common/autotest_common.sh@10 -- # set +x 00:33:16.378 04:30:30 -- nvmf/common.sh@469 -- # nvmfpid=36515 00:33:16.378 04:30:30 -- nvmf/common.sh@470 -- # waitforlisten 36515 00:33:16.378 04:30:30 -- common/autotest_common.sh@819 -- # '[' -z 36515 ']' 00:33:16.378 04:30:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.378 04:30:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:16.378 04:30:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.378 04:30:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:16.378 04:30:30 -- common/autotest_common.sh@10 -- # set +x 00:33:16.378 04:30:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:16.378 [2024-05-14 04:30:30.880364] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:16.378 [2024-05-14 04:30:30.880466] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.378 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.638 [2024-05-14 04:30:31.004192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.638 [2024-05-14 04:30:31.094754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:16.638 [2024-05-14 04:30:31.094910] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.638 [2024-05-14 04:30:31.094924] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.638 [2024-05-14 04:30:31.094933] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:16.638 [2024-05-14 04:30:31.094957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.209 04:30:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:17.209 04:30:31 -- common/autotest_common.sh@852 -- # return 0 00:33:17.209 04:30:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:17.209 04:30:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:17.209 04:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:17.209 04:30:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.209 04:30:31 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:17.209 04:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.209 04:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:17.209 [2024-05-14 04:30:31.630142] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.209 04:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.209 04:30:31 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:17.209 04:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.209 04:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:17.209 [2024-05-14 04:30:31.638298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:17.209 04:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.209 04:30:31 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:17.209 04:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.209 04:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:17.209 null0 00:33:17.209 04:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.209 04:30:31 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:17.209 04:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.209 04:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:17.209 null1 00:33:17.209 04:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.209 04:30:31 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:17.209 04:30:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.209 04:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:17.209 04:30:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.209 04:30:31 -- host/discovery.sh@45 -- # hostpid=36744 00:33:17.209 04:30:31 -- host/discovery.sh@46 -- # waitforlisten 36744 /tmp/host.sock 00:33:17.209 04:30:31 -- common/autotest_common.sh@819 -- # '[' -z 36744 ']' 00:33:17.209 04:30:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:33:17.209 04:30:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:17.209 04:30:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:17.209 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:17.209 04:30:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:17.209 04:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:17.209 04:30:31 -- host/discovery.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:17.209 [2024-05-14 04:30:31.741754] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:33:17.209 [2024-05-14 04:30:31.741866] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid36744 ] 00:33:17.468 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.468 [2024-05-14 04:30:31.857972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.468 [2024-05-14 04:30:31.947709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:17.468 [2024-05-14 04:30:31.947894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.034 04:30:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:18.034 04:30:32 -- common/autotest_common.sh@852 -- # return 0 00:33:18.034 04:30:32 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:18.034 04:30:32 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:18.034 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.034 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.034 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.034 04:30:32 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:18.034 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.034 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.034 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.034 04:30:32 -- host/discovery.sh@72 -- # notify_id=0 00:33:18.034 04:30:32 -- host/discovery.sh@78 -- # get_subsystem_names 00:33:18.034 04:30:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.034 04:30:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.034 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.034 04:30:32 -- host/discovery.sh@59 -- # xargs 00:33:18.034 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.034 04:30:32 -- host/discovery.sh@59 -- # sort 00:33:18.034 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.034 04:30:32 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:33:18.034 04:30:32 -- host/discovery.sh@79 -- # get_bdev_list 00:33:18.034 04:30:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.034 04:30:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:18.034 04:30:32 -- host/discovery.sh@55 -- # sort 00:33:18.034 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.034 04:30:32 -- host/discovery.sh@55 -- # xargs 00:33:18.034 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.034 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.034 04:30:32 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:33:18.034 04:30:32 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:18.034 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.034 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.034 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.034 04:30:32 -- host/discovery.sh@82 -- # get_subsystem_names 00:33:18.034 04:30:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.034 04:30:32 -- host/discovery.sh@59 -- # xargs 
00:33:18.034 04:30:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.034 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.034 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.034 04:30:32 -- host/discovery.sh@59 -- # sort 00:33:18.034 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.035 04:30:32 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:33:18.035 04:30:32 -- host/discovery.sh@83 -- # get_bdev_list 00:33:18.035 04:30:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.035 04:30:32 -- host/discovery.sh@55 -- # sort 00:33:18.035 04:30:32 -- host/discovery.sh@55 -- # xargs 00:33:18.035 04:30:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:18.035 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.035 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.035 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.035 04:30:32 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:18.035 04:30:32 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:18.035 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.035 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.293 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.293 04:30:32 -- host/discovery.sh@86 -- # get_subsystem_names 00:33:18.293 04:30:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.293 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.293 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.293 04:30:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.293 04:30:32 -- host/discovery.sh@59 -- # sort 00:33:18.293 04:30:32 -- host/discovery.sh@59 -- # xargs 00:33:18.293 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.293 04:30:32 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:33:18.293 04:30:32 -- host/discovery.sh@87 -- # get_bdev_list 00:33:18.293 04:30:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.293 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.293 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.293 04:30:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:18.293 04:30:32 -- host/discovery.sh@55 -- # xargs 00:33:18.293 04:30:32 -- host/discovery.sh@55 -- # sort 00:33:18.293 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.293 04:30:32 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:18.293 04:30:32 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:18.293 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.293 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.293 [2024-05-14 04:30:32.722529] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.293 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.293 04:30:32 -- host/discovery.sh@92 -- # get_subsystem_names 00:33:18.293 04:30:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.293 04:30:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.293 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.293 04:30:32 -- host/discovery.sh@59 -- # xargs 00:33:18.293 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.293 
04:30:32 -- host/discovery.sh@59 -- # sort 00:33:18.293 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.293 04:30:32 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:18.293 04:30:32 -- host/discovery.sh@93 -- # get_bdev_list 00:33:18.293 04:30:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:18.293 04:30:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.293 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.293 04:30:32 -- host/discovery.sh@55 -- # sort 00:33:18.293 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.293 04:30:32 -- host/discovery.sh@55 -- # xargs 00:33:18.293 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.293 04:30:32 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:33:18.293 04:30:32 -- host/discovery.sh@94 -- # get_notification_count 00:33:18.293 04:30:32 -- host/discovery.sh@74 -- # jq '. | length' 00:33:18.293 04:30:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:18.293 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.293 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.293 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.293 04:30:32 -- host/discovery.sh@74 -- # notification_count=0 00:33:18.293 04:30:32 -- host/discovery.sh@75 -- # notify_id=0 00:33:18.293 04:30:32 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:33:18.293 04:30:32 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:18.293 04:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.293 04:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:18.293 04:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.293 04:30:32 -- host/discovery.sh@100 -- # sleep 1 00:33:19.230 [2024-05-14 04:30:33.496022] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:19.230 [2024-05-14 04:30:33.496054] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:19.230 [2024-05-14 04:30:33.496073] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:19.230 [2024-05-14 04:30:33.584138] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:19.230 [2024-05-14 04:30:33.767419] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:19.230 [2024-05-14 04:30:33.767447] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:19.490 04:30:33 -- host/discovery.sh@101 -- # get_subsystem_names 00:33:19.490 04:30:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:19.490 04:30:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:19.490 04:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.490 04:30:33 -- common/autotest_common.sh@10 -- # set +x 00:33:19.490 04:30:33 -- host/discovery.sh@59 -- # sort 00:33:19.490 04:30:33 -- host/discovery.sh@59 -- # xargs 00:33:19.490 04:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.490 04:30:33 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.490 04:30:33 -- host/discovery.sh@102 -- # get_bdev_list 00:33:19.490 04:30:33 -- host/discovery.sh@55 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.490 04:30:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:19.490 04:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.490 04:30:33 -- host/discovery.sh@55 -- # sort 00:33:19.490 04:30:33 -- common/autotest_common.sh@10 -- # set +x 00:33:19.490 04:30:33 -- host/discovery.sh@55 -- # xargs 00:33:19.490 04:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.490 04:30:33 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:19.490 04:30:33 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:33:19.490 04:30:33 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:19.490 04:30:33 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:19.490 04:30:33 -- host/discovery.sh@63 -- # sort -n 00:33:19.490 04:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.490 04:30:33 -- common/autotest_common.sh@10 -- # set +x 00:33:19.490 04:30:33 -- host/discovery.sh@63 -- # xargs 00:33:19.490 04:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.490 04:30:33 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:33:19.490 04:30:33 -- host/discovery.sh@104 -- # get_notification_count 00:33:19.490 04:30:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:19.490 04:30:33 -- host/discovery.sh@74 -- # jq '. | length' 00:33:19.490 04:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.490 04:30:33 -- common/autotest_common.sh@10 -- # set +x 00:33:19.490 04:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.490 04:30:34 -- host/discovery.sh@74 -- # notification_count=1 00:33:19.490 04:30:34 -- host/discovery.sh@75 -- # notify_id=1 00:33:19.490 04:30:34 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:33:19.490 04:30:34 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:19.490 04:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.490 04:30:34 -- common/autotest_common.sh@10 -- # set +x 00:33:19.490 04:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.490 04:30:34 -- host/discovery.sh@109 -- # sleep 1 00:33:20.865 04:30:35 -- host/discovery.sh@110 -- # get_bdev_list 00:33:20.865 04:30:35 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.865 04:30:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:20.865 04:30:35 -- host/discovery.sh@55 -- # sort 00:33:20.865 04:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.865 04:30:35 -- host/discovery.sh@55 -- # xargs 00:33:20.865 04:30:35 -- common/autotest_common.sh@10 -- # set +x 00:33:20.865 04:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.865 04:30:35 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:20.865 04:30:35 -- host/discovery.sh@111 -- # get_notification_count 00:33:20.865 04:30:35 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:20.865 04:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.865 04:30:35 -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:20.865 04:30:35 -- common/autotest_common.sh@10 -- # set +x 00:33:20.865 04:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.865 04:30:35 -- host/discovery.sh@74 -- # notification_count=1 00:33:20.865 04:30:35 -- host/discovery.sh@75 -- # notify_id=2 00:33:20.865 04:30:35 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:33:20.865 04:30:35 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:20.865 04:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.865 04:30:35 -- common/autotest_common.sh@10 -- # set +x 00:33:20.865 [2024-05-14 04:30:35.115909] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:20.865 [2024-05-14 04:30:35.116271] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:20.865 [2024-05-14 04:30:35.116307] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:20.865 04:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.865 04:30:35 -- host/discovery.sh@117 -- # sleep 1 00:33:20.865 [2024-05-14 04:30:35.206347] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:21.126 [2024-05-14 04:30:35.513118] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:21.126 [2024-05-14 04:30:35.513145] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:21.126 [2024-05-14 04:30:35.513155] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:21.698 04:30:36 -- host/discovery.sh@118 -- # get_subsystem_names 00:33:21.698 04:30:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:21.698 04:30:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:21.698 04:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.698 04:30:36 -- common/autotest_common.sh@10 -- # set +x 00:33:21.698 04:30:36 -- host/discovery.sh@59 -- # sort 00:33:21.698 04:30:36 -- host/discovery.sh@59 -- # xargs 00:33:21.698 04:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.698 04:30:36 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.698 04:30:36 -- host/discovery.sh@119 -- # get_bdev_list 00:33:21.698 04:30:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:21.698 04:30:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:21.698 04:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.698 04:30:36 -- host/discovery.sh@55 -- # sort 00:33:21.698 04:30:36 -- common/autotest_common.sh@10 -- # set +x 00:33:21.698 04:30:36 -- host/discovery.sh@55 -- # xargs 00:33:21.698 04:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.698 04:30:36 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:21.698 04:30:36 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:33:21.698 04:30:36 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:21.698 04:30:36 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:21.698 04:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.698 04:30:36 -- 
common/autotest_common.sh@10 -- # set +x 00:33:21.698 04:30:36 -- host/discovery.sh@63 -- # sort -n 00:33:21.698 04:30:36 -- host/discovery.sh@63 -- # xargs 00:33:21.698 04:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.698 04:30:36 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:21.698 04:30:36 -- host/discovery.sh@121 -- # get_notification_count 00:33:21.698 04:30:36 -- host/discovery.sh@74 -- # jq '. | length' 00:33:21.698 04:30:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:21.698 04:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.698 04:30:36 -- common/autotest_common.sh@10 -- # set +x 00:33:21.698 04:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.958 04:30:36 -- host/discovery.sh@74 -- # notification_count=0 00:33:21.958 04:30:36 -- host/discovery.sh@75 -- # notify_id=2 00:33:21.958 04:30:36 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:33:21.958 04:30:36 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:21.958 04:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:21.958 04:30:36 -- common/autotest_common.sh@10 -- # set +x 00:33:21.958 [2024-05-14 04:30:36.292979] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:21.958 [2024-05-14 04:30:36.293018] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:21.958 [2024-05-14 04:30:36.295172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.958 [2024-05-14 04:30:36.295204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.958 [2024-05-14 04:30:36.295217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.958 [2024-05-14 04:30:36.295226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.958 [2024-05-14 04:30:36.295235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.958 [2024-05-14 04:30:36.295243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.958 [2024-05-14 04:30:36.295257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.958 [2024-05-14 04:30:36.295265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.958 [2024-05-14 04:30:36.295274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:21.958 04:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:21.958 04:30:36 -- host/discovery.sh@127 -- # sleep 1 00:33:21.958 [2024-05-14 04:30:36.305157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:21.958 [2024-05-14 04:30:36.315170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:21.958 
[2024-05-14 04:30:36.315519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.315798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.315812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:21.958 [2024-05-14 04:30:36.315823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:21.958 [2024-05-14 04:30:36.315837] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:21.958 [2024-05-14 04:30:36.315849] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:21.958 [2024-05-14 04:30:36.315857] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:21.958 [2024-05-14 04:30:36.315867] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:21.958 [2024-05-14 04:30:36.315884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.958 [2024-05-14 04:30:36.325221] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:21.958 [2024-05-14 04:30:36.325531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.325833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.325844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:21.958 [2024-05-14 04:30:36.325853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:21.958 [2024-05-14 04:30:36.325866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:21.958 [2024-05-14 04:30:36.325877] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:21.958 [2024-05-14 04:30:36.325885] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:21.958 [2024-05-14 04:30:36.325893] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:21.958 [2024-05-14 04:30:36.325911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.958 [2024-05-14 04:30:36.335260] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:21.958 [2024-05-14 04:30:36.335557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.335877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.335888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:21.958 [2024-05-14 04:30:36.335898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:21.958 [2024-05-14 04:30:36.335915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:21.958 [2024-05-14 04:30:36.335926] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:21.958 [2024-05-14 04:30:36.335933] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:21.958 [2024-05-14 04:30:36.335941] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:21.958 [2024-05-14 04:30:36.335952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.958 [2024-05-14 04:30:36.345301] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:21.958 [2024-05-14 04:30:36.345494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.345932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.345944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:21.958 [2024-05-14 04:30:36.345954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:21.958 [2024-05-14 04:30:36.345966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:21.958 [2024-05-14 04:30:36.345977] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:21.958 [2024-05-14 04:30:36.345984] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:21.958 [2024-05-14 04:30:36.345992] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:21.958 [2024-05-14 04:30:36.346004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.958 [2024-05-14 04:30:36.355348] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:21.958 [2024-05-14 04:30:36.355675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.356007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.356016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:21.958 [2024-05-14 04:30:36.356024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:21.958 [2024-05-14 04:30:36.356036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:21.958 [2024-05-14 04:30:36.356052] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:21.958 [2024-05-14 04:30:36.356059] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:21.958 [2024-05-14 04:30:36.356067] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:21.958 [2024-05-14 04:30:36.356079] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.958 [2024-05-14 04:30:36.365384] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:21.958 [2024-05-14 04:30:36.365754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.366181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.366196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:21.958 [2024-05-14 04:30:36.366204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:21.958 [2024-05-14 04:30:36.366215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:21.958 [2024-05-14 04:30:36.366240] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:21.958 [2024-05-14 04:30:36.366247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:21.958 [2024-05-14 04:30:36.366254] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:21.958 [2024-05-14 04:30:36.366265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.958 [2024-05-14 04:30:36.375418] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:21.958 [2024-05-14 04:30:36.375655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.376104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.958 [2024-05-14 04:30:36.376114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:21.958 [2024-05-14 04:30:36.376123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:21.958 [2024-05-14 04:30:36.376135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:21.959 [2024-05-14 04:30:36.376145] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:21.959 [2024-05-14 04:30:36.376152] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:21.959 [2024-05-14 04:30:36.376160] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:21.959 [2024-05-14 04:30:36.376171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.959 [2024-05-14 04:30:36.380265] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:21.959 [2024-05-14 04:30:36.380291] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:22.893 04:30:37 -- host/discovery.sh@128 -- # get_subsystem_names 00:33:22.893 04:30:37 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:22.893 04:30:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.893 04:30:37 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:22.893 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:33:22.893 04:30:37 -- host/discovery.sh@59 -- # sort 00:33:22.893 04:30:37 -- host/discovery.sh@59 -- # xargs 00:33:22.893 04:30:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.893 04:30:37 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.893 04:30:37 -- host/discovery.sh@129 -- # get_bdev_list 00:33:22.893 04:30:37 -- host/discovery.sh@55 -- # xargs 00:33:22.894 04:30:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.894 04:30:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:22.894 04:30:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.894 04:30:37 -- host/discovery.sh@55 -- # sort 00:33:22.894 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:33:22.894 04:30:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.894 04:30:37 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:22.894 04:30:37 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:33:22.894 04:30:37 -- host/discovery.sh@63 -- # xargs 00:33:22.894 04:30:37 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:22.894 04:30:37 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:22.894 04:30:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.894 
04:30:37 -- host/discovery.sh@63 -- # sort -n 00:33:22.894 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:33:22.894 04:30:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.894 04:30:37 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:33:22.894 04:30:37 -- host/discovery.sh@131 -- # get_notification_count 00:33:22.894 04:30:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:22.894 04:30:37 -- host/discovery.sh@74 -- # jq '. | length' 00:33:22.894 04:30:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.894 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:33:22.894 04:30:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.894 04:30:37 -- host/discovery.sh@74 -- # notification_count=0 00:33:22.894 04:30:37 -- host/discovery.sh@75 -- # notify_id=2 00:33:22.894 04:30:37 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:33:22.894 04:30:37 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:22.894 04:30:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:22.894 04:30:37 -- common/autotest_common.sh@10 -- # set +x 00:33:22.894 04:30:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:22.894 04:30:37 -- host/discovery.sh@135 -- # sleep 1 00:33:24.324 04:30:38 -- host/discovery.sh@136 -- # get_subsystem_names 00:33:24.324 04:30:38 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:24.324 04:30:38 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:24.324 04:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:24.324 04:30:38 -- host/discovery.sh@59 -- # sort 00:33:24.324 04:30:38 -- common/autotest_common.sh@10 -- # set +x 00:33:24.324 04:30:38 -- host/discovery.sh@59 -- # xargs 00:33:24.324 04:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:24.324 04:30:38 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:33:24.324 04:30:38 -- host/discovery.sh@137 -- # get_bdev_list 00:33:24.324 04:30:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:24.324 04:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:24.324 04:30:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:24.324 04:30:38 -- common/autotest_common.sh@10 -- # set +x 00:33:24.324 04:30:38 -- host/discovery.sh@55 -- # sort 00:33:24.324 04:30:38 -- host/discovery.sh@55 -- # xargs 00:33:24.324 04:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:24.324 04:30:38 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:33:24.324 04:30:38 -- host/discovery.sh@138 -- # get_notification_count 00:33:24.324 04:30:38 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:24.324 04:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:24.324 04:30:38 -- common/autotest_common.sh@10 -- # set +x 00:33:24.324 04:30:38 -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:24.324 04:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:24.324 04:30:38 -- host/discovery.sh@74 -- # notification_count=2 00:33:24.324 04:30:38 -- host/discovery.sh@75 -- # notify_id=4 00:33:24.324 04:30:38 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:33:24.324 04:30:38 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:24.324 04:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:24.324 04:30:38 -- common/autotest_common.sh@10 -- # set +x 00:33:25.260 [2024-05-14 04:30:39.646502] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:25.260 [2024-05-14 04:30:39.646528] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:25.260 [2024-05-14 04:30:39.646547] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:25.260 [2024-05-14 04:30:39.734614] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:25.520 [2024-05-14 04:30:40.046519] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:25.520 [2024-05-14 04:30:40.046559] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:25.520 04:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.520 04:30:40 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:25.520 04:30:40 -- common/autotest_common.sh@640 -- # local es=0 00:33:25.520 04:30:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:25.520 04:30:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:33:25.520 04:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:25.520 04:30:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:33:25.520 04:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:25.520 04:30:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:25.520 04:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.520 04:30:40 -- common/autotest_common.sh@10 -- # set +x 00:33:25.520 request: 00:33:25.520 { 00:33:25.520 "name": "nvme", 00:33:25.520 "trtype": "tcp", 00:33:25.520 "traddr": "10.0.0.2", 00:33:25.520 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:25.520 "adrfam": "ipv4", 00:33:25.520 "trsvcid": "8009", 00:33:25.520 "wait_for_attach": true, 00:33:25.520 "method": "bdev_nvme_start_discovery", 00:33:25.520 "req_id": 1 00:33:25.520 } 00:33:25.520 Got JSON-RPC error response 00:33:25.520 response: 00:33:25.520 { 00:33:25.520 "code": -17, 00:33:25.520 "message": "File exists" 00:33:25.520 } 00:33:25.520 04:30:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:33:25.520 04:30:40 -- common/autotest_common.sh@643 -- # es=1 00:33:25.520 04:30:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:33:25.520 04:30:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:33:25.520 04:30:40 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:33:25.520 04:30:40 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:33:25.520 04:30:40 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:25.520 04:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.520 04:30:40 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:25.520 04:30:40 -- common/autotest_common.sh@10 -- # set +x 00:33:25.520 04:30:40 -- host/discovery.sh@67 -- # sort 00:33:25.520 04:30:40 -- host/discovery.sh@67 -- # xargs 00:33:25.520 04:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.780 04:30:40 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:33:25.780 04:30:40 -- host/discovery.sh@147 -- # get_bdev_list 00:33:25.780 04:30:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:25.780 04:30:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:25.780 04:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.780 04:30:40 -- host/discovery.sh@55 -- # sort 00:33:25.780 04:30:40 -- host/discovery.sh@55 -- # xargs 00:33:25.780 04:30:40 -- common/autotest_common.sh@10 -- # set +x 00:33:25.780 04:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.780 04:30:40 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:25.780 04:30:40 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:25.780 04:30:40 -- common/autotest_common.sh@640 -- # local es=0 00:33:25.780 04:30:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:25.780 04:30:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:33:25.780 04:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:25.780 04:30:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:33:25.780 04:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:25.780 04:30:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:25.780 04:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.780 04:30:40 -- common/autotest_common.sh@10 -- # set +x 00:33:25.780 request: 00:33:25.780 { 00:33:25.780 "name": "nvme_second", 00:33:25.780 "trtype": "tcp", 00:33:25.780 "traddr": "10.0.0.2", 00:33:25.780 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:25.780 "adrfam": "ipv4", 00:33:25.780 "trsvcid": "8009", 00:33:25.780 "wait_for_attach": true, 00:33:25.780 "method": "bdev_nvme_start_discovery", 00:33:25.780 "req_id": 1 00:33:25.780 } 00:33:25.780 Got JSON-RPC error response 00:33:25.780 response: 00:33:25.780 { 00:33:25.780 "code": -17, 00:33:25.780 "message": "File exists" 00:33:25.780 } 00:33:25.780 04:30:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:33:25.780 04:30:40 -- common/autotest_common.sh@643 -- # es=1 00:33:25.780 04:30:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:33:25.780 04:30:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:33:25.780 04:30:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:33:25.780 04:30:40 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:33:25.780 04:30:40 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:33:25.780 04:30:40 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:25.780 04:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.780 04:30:40 -- host/discovery.sh@67 -- # sort 00:33:25.780 04:30:40 -- common/autotest_common.sh@10 -- # set +x 00:33:25.780 04:30:40 -- host/discovery.sh@67 -- # xargs 00:33:25.780 04:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.780 04:30:40 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:33:25.780 04:30:40 -- host/discovery.sh@153 -- # get_bdev_list 00:33:25.780 04:30:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:25.780 04:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.780 04:30:40 -- common/autotest_common.sh@10 -- # set +x 00:33:25.780 04:30:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:25.780 04:30:40 -- host/discovery.sh@55 -- # sort 00:33:25.780 04:30:40 -- host/discovery.sh@55 -- # xargs 00:33:25.780 04:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.780 04:30:40 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:25.780 04:30:40 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:25.780 04:30:40 -- common/autotest_common.sh@640 -- # local es=0 00:33:25.780 04:30:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:25.780 04:30:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:33:25.780 04:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:25.780 04:30:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:33:25.780 04:30:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:25.780 04:30:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:25.780 04:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.780 04:30:40 -- common/autotest_common.sh@10 -- # set +x 00:33:26.716 [2024-05-14 04:30:41.259338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.716 [2024-05-14 04:30:41.259552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.716 [2024-05-14 04:30:41.259567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000006240 with addr=10.0.0.2, port=8010 00:33:26.716 [2024-05-14 04:30:41.259602] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:26.716 [2024-05-14 04:30:41.259614] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:26.716 [2024-05-14 04:30:41.259626] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:28.090 [2024-05-14 04:30:42.259422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.090 [2024-05-14 04:30:42.259712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.090 [2024-05-14 04:30:42.259724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000006400 with addr=10.0.0.2, port=8010 00:33:28.090 [2024-05-14 04:30:42.259753] 
nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:28.090 [2024-05-14 04:30:42.259762] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:28.090 [2024-05-14 04:30:42.259770] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:29.026 [2024-05-14 04:30:43.258911] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:29.026 request: 00:33:29.026 { 00:33:29.026 "name": "nvme_second", 00:33:29.026 "trtype": "tcp", 00:33:29.026 "traddr": "10.0.0.2", 00:33:29.026 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:29.026 "adrfam": "ipv4", 00:33:29.026 "trsvcid": "8010", 00:33:29.026 "attach_timeout_ms": 3000, 00:33:29.026 "method": "bdev_nvme_start_discovery", 00:33:29.026 "req_id": 1 00:33:29.026 } 00:33:29.026 Got JSON-RPC error response 00:33:29.026 response: 00:33:29.026 { 00:33:29.026 "code": -110, 00:33:29.026 "message": "Connection timed out" 00:33:29.026 } 00:33:29.026 04:30:43 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:33:29.026 04:30:43 -- common/autotest_common.sh@643 -- # es=1 00:33:29.026 04:30:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:33:29.026 04:30:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:33:29.026 04:30:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:33:29.026 04:30:43 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:33:29.026 04:30:43 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:29.026 04:30:43 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:29.026 04:30:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:29.026 04:30:43 -- host/discovery.sh@67 -- # sort 00:33:29.026 04:30:43 -- common/autotest_common.sh@10 -- # set +x 00:33:29.026 04:30:43 -- host/discovery.sh@67 -- # xargs 00:33:29.026 04:30:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:29.026 04:30:43 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:33:29.026 04:30:43 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:33:29.026 04:30:43 -- host/discovery.sh@162 -- # kill 36744 00:33:29.026 04:30:43 -- host/discovery.sh@163 -- # nvmftestfini 00:33:29.026 04:30:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:29.026 04:30:43 -- nvmf/common.sh@116 -- # sync 00:33:29.026 04:30:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:29.026 04:30:43 -- nvmf/common.sh@119 -- # set +e 00:33:29.026 04:30:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:29.026 04:30:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:29.026 rmmod nvme_tcp 00:33:29.026 rmmod nvme_fabrics 00:33:29.026 rmmod nvme_keyring 00:33:29.026 04:30:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:29.026 04:30:43 -- nvmf/common.sh@123 -- # set -e 00:33:29.026 04:30:43 -- nvmf/common.sh@124 -- # return 0 00:33:29.026 04:30:43 -- nvmf/common.sh@477 -- # '[' -n 36515 ']' 00:33:29.026 04:30:43 -- nvmf/common.sh@478 -- # killprocess 36515 00:33:29.026 04:30:43 -- common/autotest_common.sh@926 -- # '[' -z 36515 ']' 00:33:29.026 04:30:43 -- common/autotest_common.sh@930 -- # kill -0 36515 00:33:29.026 04:30:43 -- common/autotest_common.sh@931 -- # uname 00:33:29.026 04:30:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:29.026 04:30:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 36515 00:33:29.026 04:30:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:29.026 04:30:43 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:29.026 04:30:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 36515' 00:33:29.026 killing process with pid 36515 00:33:29.026 04:30:43 -- common/autotest_common.sh@945 -- # kill 36515 00:33:29.026 04:30:43 -- common/autotest_common.sh@950 -- # wait 36515 00:33:29.284 04:30:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:29.284 04:30:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:29.284 04:30:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:29.284 04:30:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:29.284 04:30:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:29.284 04:30:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.284 04:30:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:29.284 04:30:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.813 04:30:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:31.813 00:33:31.813 real 0m20.746s 00:33:31.813 user 0m27.491s 00:33:31.813 sys 0m5.486s 00:33:31.813 04:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.813 04:30:45 -- common/autotest_common.sh@10 -- # set +x 00:33:31.813 ************************************ 00:33:31.813 END TEST nvmf_discovery 00:33:31.813 ************************************ 00:33:31.813 04:30:45 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:31.813 04:30:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:31.813 04:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:31.813 04:30:45 -- common/autotest_common.sh@10 -- # set +x 00:33:31.813 ************************************ 00:33:31.813 START TEST nvmf_discovery_remove_ifc 00:33:31.814 ************************************ 00:33:31.814 04:30:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:31.814 * Looking for test storage... 
00:33:31.814 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:33:31.814 04:30:45 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.814 04:30:45 -- nvmf/common.sh@7 -- # uname -s 00:33:31.814 04:30:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.814 04:30:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.814 04:30:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.814 04:30:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.814 04:30:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.814 04:30:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.814 04:30:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.814 04:30:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.814 04:30:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.814 04:30:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.814 04:30:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:33:31.814 04:30:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:33:31.814 04:30:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.814 04:30:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.814 04:30:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:31.814 04:30:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:31.814 04:30:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.814 04:30:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.814 04:30:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.814 04:30:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.814 04:30:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.814 04:30:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.814 04:30:46 -- paths/export.sh@5 -- # export PATH 00:33:31.814 04:30:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.814 04:30:46 -- nvmf/common.sh@46 -- # : 0 00:33:31.814 04:30:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:31.814 04:30:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:31.814 04:30:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:31.814 04:30:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.814 04:30:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.814 04:30:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:31.814 04:30:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:31.814 04:30:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:31.814 04:30:46 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:31.814 04:30:46 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:31.814 04:30:46 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:31.814 04:30:46 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:31.814 04:30:46 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:31.814 04:30:46 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:31.814 04:30:46 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:31.814 04:30:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:31.814 04:30:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.814 04:30:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:31.814 04:30:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:31.814 04:30:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:31.814 04:30:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.814 04:30:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.814 04:30:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.814 04:30:46 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:33:31.814 04:30:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:31.814 04:30:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:31.814 04:30:46 -- common/autotest_common.sh@10 -- # set +x 00:33:37.088 04:30:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:37.088 04:30:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:37.088 04:30:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:37.088 
04:30:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:37.088 04:30:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:37.088 04:30:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:37.088 04:30:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:37.088 04:30:50 -- nvmf/common.sh@294 -- # net_devs=() 00:33:37.088 04:30:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:37.088 04:30:50 -- nvmf/common.sh@295 -- # e810=() 00:33:37.088 04:30:50 -- nvmf/common.sh@295 -- # local -ga e810 00:33:37.088 04:30:50 -- nvmf/common.sh@296 -- # x722=() 00:33:37.088 04:30:50 -- nvmf/common.sh@296 -- # local -ga x722 00:33:37.088 04:30:50 -- nvmf/common.sh@297 -- # mlx=() 00:33:37.088 04:30:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:37.088 04:30:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:37.088 04:30:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:37.088 04:30:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:37.088 04:30:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:37.088 04:30:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:33:37.088 Found 0000:27:00.0 (0x8086 - 0x159b) 00:33:37.088 04:30:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:37.088 04:30:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:33:37.088 Found 0000:27:00.1 (0x8086 - 0x159b) 00:33:37.088 04:30:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:37.088 04:30:50 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:37.088 
04:30:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.088 04:30:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:37.088 04:30:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.088 04:30:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:33:37.088 Found net devices under 0000:27:00.0: cvl_0_0 00:33:37.088 04:30:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.088 04:30:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:37.088 04:30:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.088 04:30:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:37.088 04:30:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.088 04:30:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:33:37.088 Found net devices under 0000:27:00.1: cvl_0_1 00:33:37.088 04:30:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.088 04:30:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:37.088 04:30:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:37.088 04:30:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:37.088 04:30:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:37.088 04:30:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:37.088 04:30:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:37.088 04:30:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:37.088 04:30:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:37.088 04:30:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:37.088 04:30:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:37.088 04:30:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:37.088 04:30:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:37.088 04:30:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:37.088 04:30:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:37.088 04:30:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:37.088 04:30:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:37.088 04:30:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:37.088 04:30:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:37.088 04:30:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:37.088 04:30:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:37.088 04:30:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:37.088 04:30:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:37.088 04:30:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:37.088 04:30:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:37.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:37.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:33:37.088 00:33:37.088 --- 10.0.0.2 ping statistics --- 00:33:37.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.088 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:33:37.088 04:30:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:37.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:37.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:33:37.088 00:33:37.088 --- 10.0.0.1 ping statistics --- 00:33:37.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.088 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:33:37.088 04:30:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:37.088 04:30:51 -- nvmf/common.sh@410 -- # return 0 00:33:37.088 04:30:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:37.088 04:30:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:37.088 04:30:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:37.088 04:30:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:37.088 04:30:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:37.088 04:30:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:37.088 04:30:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:37.088 04:30:51 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:37.088 04:30:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:37.088 04:30:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:37.088 04:30:51 -- common/autotest_common.sh@10 -- # set +x 00:33:37.088 04:30:51 -- nvmf/common.sh@469 -- # nvmfpid=43016 00:33:37.089 04:30:51 -- nvmf/common.sh@470 -- # waitforlisten 43016 00:33:37.089 04:30:51 -- common/autotest_common.sh@819 -- # '[' -z 43016 ']' 00:33:37.089 04:30:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.089 04:30:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:37.089 04:30:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.089 04:30:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:37.089 04:30:51 -- common/autotest_common.sh@10 -- # set +x 00:33:37.089 04:30:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:37.089 [2024-05-14 04:30:51.124949] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:37.089 [2024-05-14 04:30:51.125050] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:37.089 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.089 [2024-05-14 04:30:51.244643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.089 [2024-05-14 04:30:51.335335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:37.089 [2024-05-14 04:30:51.335497] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:37.089 [2024-05-14 04:30:51.335510] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:37.089 [2024-05-14 04:30:51.335519] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:37.089 [2024-05-14 04:30:51.335544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.347 04:30:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:37.347 04:30:51 -- common/autotest_common.sh@852 -- # return 0 00:33:37.347 04:30:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:37.347 04:30:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:37.347 04:30:51 -- common/autotest_common.sh@10 -- # set +x 00:33:37.347 04:30:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.347 04:30:51 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:37.347 04:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:37.347 04:30:51 -- common/autotest_common.sh@10 -- # set +x 00:33:37.347 [2024-05-14 04:30:51.856431] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.347 [2024-05-14 04:30:51.864633] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:37.347 null0 00:33:37.347 [2024-05-14 04:30:51.896538] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.347 04:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:37.347 04:30:51 -- host/discovery_remove_ifc.sh@59 -- # hostpid=43116 00:33:37.347 04:30:51 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 43116 /tmp/host.sock 00:33:37.347 04:30:51 -- common/autotest_common.sh@819 -- # '[' -z 43116 ']' 00:33:37.347 04:30:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:33:37.347 04:30:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:37.347 04:30:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:37.347 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:37.347 04:30:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:37.347 04:30:51 -- common/autotest_common.sh@10 -- # set +x 00:33:37.347 04:30:51 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:37.606 [2024-05-14 04:30:51.993069] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:33:37.606 [2024-05-14 04:30:51.993175] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43116 ] 00:33:37.606 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.606 [2024-05-14 04:30:52.108867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.867 [2024-05-14 04:30:52.200791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:37.867 [2024-05-14 04:30:52.200980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.126 04:30:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:38.126 04:30:52 -- common/autotest_common.sh@852 -- # return 0 00:33:38.126 04:30:52 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:38.126 04:30:52 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:38.126 04:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.126 04:30:52 -- common/autotest_common.sh@10 -- # set +x 00:33:38.126 04:30:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.126 04:30:52 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:38.126 04:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.126 04:30:52 -- common/autotest_common.sh@10 -- # set +x 00:33:38.384 04:30:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.384 04:30:52 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:38.384 04:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:38.384 04:30:52 -- common/autotest_common.sh@10 -- # set +x 00:33:39.319 [2024-05-14 04:30:53.881330] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:39.319 [2024-05-14 04:30:53.881361] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:39.319 [2024-05-14 04:30:53.881380] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:39.578 [2024-05-14 04:30:54.013472] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:39.838 [2024-05-14 04:30:54.234731] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:39.838 [2024-05-14 04:30:54.234788] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:39.838 [2024-05-14 04:30:54.234823] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:39.838 [2024-05-14 04:30:54.234845] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:39.838 [2024-05-14 04:30:54.234871] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:39.838 04:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.838 04:30:54 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:39.838 04:30:54 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:33:39.838 [2024-05-14 04:30:54.239194] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x613000003f40 was disconnected and freed. delete nvme_qpair. 00:33:39.838 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.838 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.838 04:30:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.839 04:30:54 -- common/autotest_common.sh@10 -- # set +x 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.839 04:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.839 04:30:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.839 04:30:54 -- common/autotest_common.sh@10 -- # set +x 00:33:39.839 04:30:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.839 04:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:40.097 04:30:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:40.097 04:30:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.030 04:30:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.030 04:30:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.030 04:30:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.030 04:30:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.030 04:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.030 04:30:55 -- common/autotest_common.sh@10 -- # set +x 00:33:41.030 04:30:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.030 04:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.030 04:30:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:41.030 04:30:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.968 04:30:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.968 04:30:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.968 04:30:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.968 04:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.968 04:30:56 -- common/autotest_common.sh@10 -- # set +x 00:33:41.968 04:30:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.968 04:30:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.968 04:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.968 04:30:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:41.968 04:30:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.342 04:30:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.342 04:30:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd 
-s /tmp/host.sock bdev_get_bdevs 00:33:43.342 04:30:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.342 04:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:43.342 04:30:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.342 04:30:57 -- common/autotest_common.sh@10 -- # set +x 00:33:43.342 04:30:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.342 04:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.342 04:30:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:43.342 04:30:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.314 04:30:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.314 04:30:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.314 04:30:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.314 04:30:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.314 04:30:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.314 04:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.314 04:30:58 -- common/autotest_common.sh@10 -- # set +x 00:33:44.314 04:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.314 04:30:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:44.314 04:30:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:45.249 04:30:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.249 04:30:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.249 04:30:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.249 04:30:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.249 04:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:45.249 04:30:59 -- common/autotest_common.sh@10 -- # set +x 00:33:45.249 04:30:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.249 04:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:45.249 04:30:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:45.249 04:30:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:45.249 [2024-05-14 04:30:59.662747] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:45.249 [2024-05-14 04:30:59.662802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.249 [2024-05-14 04:30:59.662817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.249 [2024-05-14 04:30:59.662834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.249 [2024-05-14 04:30:59.662842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.249 [2024-05-14 04:30:59.662851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.249 [2024-05-14 04:30:59.662859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.249 [2024-05-14 04:30:59.662868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:33:45.249 [2024-05-14 04:30:59.662876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.250 [2024-05-14 04:30:59.662886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.250 [2024-05-14 04:30:59.662894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.250 [2024-05-14 04:30:59.662903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:45.250 [2024-05-14 04:30:59.672740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:45.250 [2024-05-14 04:30:59.682758] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:46.188 04:31:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:46.188 04:31:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:46.188 04:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.188 04:31:00 -- common/autotest_common.sh@10 -- # set +x 00:33:46.188 04:31:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:46.188 04:31:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:46.188 04:31:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:46.188 [2024-05-14 04:31:00.690232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:47.564 [2024-05-14 04:31:01.714226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:47.564 [2024-05-14 04:31:01.714304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:33:47.564 [2024-05-14 04:31:01.714340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:33:47.564 [2024-05-14 04:31:01.714977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:33:47.564 [2024-05-14 04:31:01.715016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.564 [2024-05-14 04:31:01.715064] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:47.564 [2024-05-14 04:31:01.715108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:47.564 [2024-05-14 04:31:01.715130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:47.564 [2024-05-14 04:31:01.715153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:47.564 [2024-05-14 04:31:01.715167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:47.564 [2024-05-14 04:31:01.715183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:47.564 [2024-05-14 04:31:01.715218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:47.564 [2024-05-14 04:31:01.715238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:47.564 [2024-05-14 04:31:01.715253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:47.564 [2024-05-14 04:31:01.715270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:47.564 [2024-05-14 04:31:01.715285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:47.564 [2024-05-14 04:31:01.715299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:33:47.564 [2024-05-14 04:31:01.715410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6130000034c0 (9): Bad file descriptor 00:33:47.564 [2024-05-14 04:31:01.716446] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:47.565 [2024-05-14 04:31:01.716460] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:47.565 04:31:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:47.565 04:31:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:47.565 04:31:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.502 04:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:48.502 04:31:02 -- common/autotest_common.sh@10 -- # set +x 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.502 04:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.502 04:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:48.502 04:31:02 -- common/autotest_common.sh@10 -- # set +x 00:33:48.502 04:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:48.502 04:31:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:49.437 [2024-05-14 04:31:03.764610] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:49.437 [2024-05-14 04:31:03.764638] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:49.437 [2024-05-14 04:31:03.764660] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:49.437 [2024-05-14 04:31:03.852710] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:49.437 04:31:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:49.437 04:31:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.437 04:31:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:49.437 04:31:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:49.437 04:31:03 -- common/autotest_common.sh@10 -- # set +x 00:33:49.437 04:31:03 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:33:49.437 04:31:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:49.437 04:31:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:49.437 [2024-05-14 04:31:03.911683] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:49.437 [2024-05-14 04:31:03.911733] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:49.437 [2024-05-14 04:31:03.911763] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:49.437 [2024-05-14 04:31:03.911782] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:49.437 [2024-05-14 04:31:03.911793] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:49.437 [2024-05-14 04:31:03.920299] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x613000004d40 was disconnected and freed. delete nvme_qpair. 00:33:49.438 04:31:03 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:49.438 04:31:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:50.372 04:31:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:50.372 04:31:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.372 04:31:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:50.372 04:31:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:50.372 04:31:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:50.372 04:31:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:50.372 04:31:04 -- common/autotest_common.sh@10 -- # set +x 00:33:50.372 04:31:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:50.632 04:31:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:50.632 04:31:04 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:50.632 04:31:04 -- host/discovery_remove_ifc.sh@90 -- # killprocess 43116 00:33:50.632 04:31:04 -- common/autotest_common.sh@926 -- # '[' -z 43116 ']' 00:33:50.632 04:31:04 -- common/autotest_common.sh@930 -- # kill -0 43116 00:33:50.632 04:31:04 -- common/autotest_common.sh@931 -- # uname 00:33:50.633 04:31:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:50.633 04:31:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 43116 00:33:50.633 04:31:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:50.633 04:31:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:50.633 04:31:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 43116' 00:33:50.633 killing process with pid 43116 00:33:50.633 04:31:05 -- common/autotest_common.sh@945 -- # kill 43116 00:33:50.633 04:31:05 -- common/autotest_common.sh@950 -- # wait 43116 00:33:50.893 04:31:05 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:50.893 04:31:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:50.893 04:31:05 -- nvmf/common.sh@116 -- # sync 00:33:50.893 04:31:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:50.893 04:31:05 -- nvmf/common.sh@119 -- # set +e 00:33:50.893 04:31:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:50.893 04:31:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:50.893 rmmod nvme_tcp 00:33:50.893 rmmod nvme_fabrics 00:33:50.893 rmmod nvme_keyring 00:33:51.151 04:31:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:51.151 04:31:05 -- nvmf/common.sh@123 -- # set -e 00:33:51.151 04:31:05 -- 
nvmf/common.sh@124 -- # return 0 00:33:51.151 04:31:05 -- nvmf/common.sh@477 -- # '[' -n 43016 ']' 00:33:51.151 04:31:05 -- nvmf/common.sh@478 -- # killprocess 43016 00:33:51.151 04:31:05 -- common/autotest_common.sh@926 -- # '[' -z 43016 ']' 00:33:51.151 04:31:05 -- common/autotest_common.sh@930 -- # kill -0 43016 00:33:51.151 04:31:05 -- common/autotest_common.sh@931 -- # uname 00:33:51.151 04:31:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:51.151 04:31:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 43016 00:33:51.151 04:31:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:51.151 04:31:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:51.151 04:31:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 43016' 00:33:51.151 killing process with pid 43016 00:33:51.151 04:31:05 -- common/autotest_common.sh@945 -- # kill 43016 00:33:51.151 04:31:05 -- common/autotest_common.sh@950 -- # wait 43016 00:33:51.407 04:31:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:51.407 04:31:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:51.407 04:31:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:51.407 04:31:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:51.407 04:31:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:51.407 04:31:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.407 04:31:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:51.407 04:31:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:53.937 04:31:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:53.937 00:33:53.937 real 0m22.100s 00:33:53.937 user 0m27.876s 00:33:53.937 sys 0m5.026s 00:33:53.937 04:31:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:53.937 04:31:08 -- common/autotest_common.sh@10 -- # set +x 00:33:53.937 ************************************ 00:33:53.937 END TEST nvmf_discovery_remove_ifc 00:33:53.937 ************************************ 00:33:53.937 04:31:08 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:33:53.937 04:31:08 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:53.937 04:31:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:53.937 04:31:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:53.937 04:31:08 -- common/autotest_common.sh@10 -- # set +x 00:33:53.937 ************************************ 00:33:53.937 START TEST nvmf_digest 00:33:53.937 ************************************ 00:33:53.937 04:31:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:53.937 * Looking for test storage... 
00:33:53.937 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:33:53.937 04:31:08 -- host/digest.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:33:53.937 04:31:08 -- nvmf/common.sh@7 -- # uname -s 00:33:53.937 04:31:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:53.937 04:31:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:53.937 04:31:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:53.937 04:31:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:53.937 04:31:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:53.937 04:31:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:53.937 04:31:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:53.937 04:31:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:53.937 04:31:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:53.937 04:31:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:53.937 04:31:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:33:53.937 04:31:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:33:53.937 04:31:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:53.937 04:31:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:53.937 04:31:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:53.937 04:31:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:53.937 04:31:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:53.937 04:31:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:53.937 04:31:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:53.937 04:31:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.938 04:31:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.938 04:31:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.938 04:31:08 -- paths/export.sh@5 -- # export PATH 00:33:53.938 04:31:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.938 04:31:08 -- nvmf/common.sh@46 -- # : 0 00:33:53.938 04:31:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:53.938 04:31:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:53.938 04:31:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:53.938 04:31:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:53.938 04:31:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:53.938 04:31:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:53.938 04:31:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:53.938 04:31:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:53.938 04:31:08 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:53.938 04:31:08 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:53.938 04:31:08 -- host/digest.sh@16 -- # runtime=2 00:33:53.938 04:31:08 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:33:53.938 04:31:08 -- host/digest.sh@132 -- # nvmftestinit 00:33:53.938 04:31:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:53.938 04:31:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:53.938 04:31:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:53.938 04:31:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:53.938 04:31:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:53.938 04:31:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.938 04:31:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:53.938 04:31:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:53.938 04:31:08 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:33:53.938 04:31:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:53.938 04:31:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:53.938 04:31:08 -- common/autotest_common.sh@10 -- # set +x 00:34:00.510 04:31:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:00.510 04:31:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:00.510 04:31:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:00.510 04:31:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:00.510 04:31:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:00.510 04:31:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:00.510 04:31:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:00.510 04:31:13 -- 
nvmf/common.sh@294 -- # net_devs=() 00:34:00.510 04:31:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:00.510 04:31:13 -- nvmf/common.sh@295 -- # e810=() 00:34:00.510 04:31:13 -- nvmf/common.sh@295 -- # local -ga e810 00:34:00.510 04:31:13 -- nvmf/common.sh@296 -- # x722=() 00:34:00.510 04:31:13 -- nvmf/common.sh@296 -- # local -ga x722 00:34:00.510 04:31:13 -- nvmf/common.sh@297 -- # mlx=() 00:34:00.510 04:31:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:00.511 04:31:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:00.511 04:31:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:00.511 04:31:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:00.511 04:31:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:00.511 04:31:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:34:00.511 Found 0000:27:00.0 (0x8086 - 0x159b) 00:34:00.511 04:31:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:00.511 04:31:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:34:00.511 Found 0000:27:00.1 (0x8086 - 0x159b) 00:34:00.511 04:31:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:00.511 04:31:13 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:00.511 04:31:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.511 04:31:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:00.511 04:31:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.511 04:31:13 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:34:00.511 Found net devices under 0000:27:00.0: cvl_0_0 00:34:00.511 04:31:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.511 04:31:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:00.511 04:31:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.511 04:31:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:00.511 04:31:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.511 04:31:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:34:00.511 Found net devices under 0000:27:00.1: cvl_0_1 00:34:00.511 04:31:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.511 04:31:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:00.511 04:31:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:00.511 04:31:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:00.511 04:31:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:00.511 04:31:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:00.511 04:31:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:00.511 04:31:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:00.511 04:31:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:00.511 04:31:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:00.511 04:31:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:00.511 04:31:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:00.511 04:31:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:00.511 04:31:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:00.511 04:31:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:00.511 04:31:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:00.511 04:31:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:00.511 04:31:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:00.511 04:31:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:00.511 04:31:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:00.511 04:31:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:00.511 04:31:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:00.511 04:31:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.511 04:31:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.511 04:31:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:00.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:00.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:34:00.511 00:34:00.511 --- 10.0.0.2 ping statistics --- 00:34:00.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.511 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:34:00.511 04:31:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:00.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:34:00.511 00:34:00.511 --- 10.0.0.1 ping statistics --- 00:34:00.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.511 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:34:00.511 04:31:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.511 04:31:14 -- nvmf/common.sh@410 -- # return 0 00:34:00.511 04:31:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:34:00.511 04:31:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.511 04:31:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:00.511 04:31:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:00.511 04:31:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.511 04:31:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:00.511 04:31:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:00.511 04:31:14 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:00.511 04:31:14 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:34:00.511 04:31:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:00.511 04:31:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:00.511 04:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:00.511 ************************************ 00:34:00.511 START TEST nvmf_digest_clean 00:34:00.511 ************************************ 00:34:00.511 04:31:14 -- common/autotest_common.sh@1104 -- # run_digest 00:34:00.511 04:31:14 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:34:00.511 04:31:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:00.511 04:31:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:00.511 04:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:00.511 04:31:14 -- nvmf/common.sh@469 -- # nvmfpid=49934 00:34:00.511 04:31:14 -- nvmf/common.sh@470 -- # waitforlisten 49934 00:34:00.511 04:31:14 -- common/autotest_common.sh@819 -- # '[' -z 49934 ']' 00:34:00.511 04:31:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.511 04:31:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:00.511 04:31:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:00.511 04:31:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:00.511 04:31:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:00.511 04:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:00.511 [2024-05-14 04:31:14.157937] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:34:00.511 [2024-05-14 04:31:14.158047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:00.511 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.511 [2024-05-14 04:31:14.282867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.511 [2024-05-14 04:31:14.374170] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:00.511 [2024-05-14 04:31:14.374338] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:00.511 [2024-05-14 04:31:14.374352] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:00.511 [2024-05-14 04:31:14.374362] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:00.511 [2024-05-14 04:31:14.374388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:00.511 04:31:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:00.511 04:31:14 -- common/autotest_common.sh@852 -- # return 0 00:34:00.511 04:31:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:00.512 04:31:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:00.512 04:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:00.512 04:31:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:00.512 04:31:14 -- host/digest.sh@120 -- # common_target_config 00:34:00.512 04:31:14 -- host/digest.sh@43 -- # rpc_cmd 00:34:00.512 04:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:00.512 04:31:14 -- common/autotest_common.sh@10 -- # set +x 00:34:00.512 null0 00:34:00.512 [2024-05-14 04:31:15.031057] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:00.512 [2024-05-14 04:31:15.055208] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.512 04:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.512 04:31:15 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:34:00.512 04:31:15 -- host/digest.sh@77 -- # local rw bs qd 00:34:00.512 04:31:15 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:00.512 04:31:15 -- host/digest.sh@80 -- # rw=randread 00:34:00.512 04:31:15 -- host/digest.sh@80 -- # bs=4096 00:34:00.512 04:31:15 -- host/digest.sh@80 -- # qd=128 00:34:00.512 04:31:15 -- host/digest.sh@82 -- # bperfpid=50050 00:34:00.512 04:31:15 -- host/digest.sh@83 -- # waitforlisten 50050 /var/tmp/bperf.sock 00:34:00.512 04:31:15 -- common/autotest_common.sh@819 -- # '[' -z 50050 ']' 00:34:00.512 04:31:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:00.512 04:31:15 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:00.512 04:31:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:00.512 04:31:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:00.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:00.512 04:31:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:00.512 04:31:15 -- common/autotest_common.sh@10 -- # set +x 00:34:00.770 [2024-05-14 04:31:15.128616] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:00.770 [2024-05-14 04:31:15.128721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50050 ] 00:34:00.770 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.770 [2024-05-14 04:31:15.242577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.770 [2024-05-14 04:31:15.332511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.337 04:31:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:01.337 04:31:15 -- common/autotest_common.sh@852 -- # return 0 00:34:01.337 04:31:15 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:34:01.337 04:31:15 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:34:01.337 04:31:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:34:01.597 [2024-05-14 04:31:15.949013] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:34:01.597 04:31:15 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:01.597 04:31:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:08.202 04:31:22 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:08.202 04:31:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:08.202 nvme0n1 00:34:08.202 04:31:22 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:08.202 04:31:22 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:08.202 Running I/O for 2 seconds... 
00:34:10.110 00:34:10.110 Latency(us) 00:34:10.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.110 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:10.110 nvme0n1 : 2.04 20628.63 80.58 0.00 0.00 6078.43 1690.14 42218.98 00:34:10.110 =================================================================================================================== 00:34:10.110 Total : 20628.63 80.58 0.00 0.00 6078.43 1690.14 42218.98 00:34:10.110 0 00:34:10.110 04:31:24 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:34:10.110 04:31:24 -- host/digest.sh@92 -- # get_accel_stats 00:34:10.110 04:31:24 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:10.110 04:31:24 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:10.110 | select(.opcode=="crc32c") 00:34:10.110 | "\(.module_name) \(.executed)"' 00:34:10.110 04:31:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:10.369 04:31:24 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:34:10.369 04:31:24 -- host/digest.sh@93 -- # exp_module=dsa 00:34:10.369 04:31:24 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:34:10.369 04:31:24 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:34:10.369 04:31:24 -- host/digest.sh@97 -- # killprocess 50050 00:34:10.369 04:31:24 -- common/autotest_common.sh@926 -- # '[' -z 50050 ']' 00:34:10.369 04:31:24 -- common/autotest_common.sh@930 -- # kill -0 50050 00:34:10.369 04:31:24 -- common/autotest_common.sh@931 -- # uname 00:34:10.369 04:31:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:10.369 04:31:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 50050 00:34:10.369 04:31:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:10.369 04:31:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:10.369 04:31:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50050' 00:34:10.369 killing process with pid 50050 00:34:10.369 04:31:24 -- common/autotest_common.sh@945 -- # kill 50050 00:34:10.369 Received shutdown signal, test time was about 2.000000 seconds 00:34:10.369 00:34:10.369 Latency(us) 00:34:10.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.369 =================================================================================================================== 00:34:10.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:10.369 04:31:24 -- common/autotest_common.sh@950 -- # wait 50050 00:34:12.275 04:31:26 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:34:12.275 04:31:26 -- host/digest.sh@77 -- # local rw bs qd 00:34:12.275 04:31:26 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:12.275 04:31:26 -- host/digest.sh@80 -- # rw=randread 00:34:12.275 04:31:26 -- host/digest.sh@80 -- # bs=131072 00:34:12.275 04:31:26 -- host/digest.sh@80 -- # qd=16 00:34:12.275 04:31:26 -- host/digest.sh@82 -- # bperfpid=52463 00:34:12.275 04:31:26 -- host/digest.sh@83 -- # waitforlisten 52463 /var/tmp/bperf.sock 00:34:12.275 04:31:26 -- common/autotest_common.sh@819 -- # '[' -z 52463 ']' 00:34:12.275 04:31:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:12.275 04:31:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:12.275 04:31:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:12.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:12.275 04:31:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:12.275 04:31:26 -- common/autotest_common.sh@10 -- # set +x 00:34:12.275 04:31:26 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:12.275 [2024-05-14 04:31:26.840916] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:12.275 [2024-05-14 04:31:26.841031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52463 ] 00:34:12.275 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:12.275 Zero copy mechanism will not be used. 00:34:12.536 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.536 [2024-05-14 04:31:26.954875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.536 [2024-05-14 04:31:27.043444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:13.108 04:31:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:13.108 04:31:27 -- common/autotest_common.sh@852 -- # return 0 00:34:13.108 04:31:27 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:34:13.108 04:31:27 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:34:13.108 04:31:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:34:13.367 [2024-05-14 04:31:27.695962] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:34:13.367 04:31:27 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:13.367 04:31:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:19.931 04:31:33 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:19.931 04:31:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:19.931 nvme0n1 00:34:19.931 04:31:34 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:19.931 04:31:34 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:19.931 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:19.931 Zero copy mechanism will not be used. 00:34:19.931 Running I/O for 2 seconds... 
00:34:21.849 00:34:21.849 Latency(us) 00:34:21.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:21.849 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:21.849 nvme0n1 : 2.00 5734.95 716.87 0.00 0.00 2787.82 651.05 6450.12 00:34:21.849 =================================================================================================================== 00:34:21.849 Total : 5734.95 716.87 0.00 0.00 2787.82 651.05 6450.12 00:34:21.849 0 00:34:21.849 04:31:36 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:34:21.849 04:31:36 -- host/digest.sh@92 -- # get_accel_stats 00:34:21.849 04:31:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:21.849 04:31:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:21.849 | select(.opcode=="crc32c") 00:34:21.849 | "\(.module_name) \(.executed)"' 00:34:21.849 04:31:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:21.849 04:31:36 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:34:21.849 04:31:36 -- host/digest.sh@93 -- # exp_module=dsa 00:34:21.849 04:31:36 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:34:21.849 04:31:36 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:34:21.849 04:31:36 -- host/digest.sh@97 -- # killprocess 52463 00:34:21.849 04:31:36 -- common/autotest_common.sh@926 -- # '[' -z 52463 ']' 00:34:21.849 04:31:36 -- common/autotest_common.sh@930 -- # kill -0 52463 00:34:21.849 04:31:36 -- common/autotest_common.sh@931 -- # uname 00:34:21.849 04:31:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:21.849 04:31:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 52463 00:34:21.849 04:31:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:21.849 04:31:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:21.849 04:31:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52463' 00:34:21.849 killing process with pid 52463 00:34:21.849 04:31:36 -- common/autotest_common.sh@945 -- # kill 52463 00:34:21.849 Received shutdown signal, test time was about 2.000000 seconds 00:34:21.849 00:34:21.849 Latency(us) 00:34:21.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:21.849 =================================================================================================================== 00:34:21.849 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:21.849 04:31:36 -- common/autotest_common.sh@950 -- # wait 52463 00:34:23.756 04:31:38 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:34:23.756 04:31:38 -- host/digest.sh@77 -- # local rw bs qd 00:34:23.756 04:31:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:23.756 04:31:38 -- host/digest.sh@80 -- # rw=randwrite 00:34:23.756 04:31:38 -- host/digest.sh@80 -- # bs=4096 00:34:23.756 04:31:38 -- host/digest.sh@80 -- # qd=128 00:34:23.756 04:31:38 -- host/digest.sh@82 -- # bperfpid=54588 00:34:23.756 04:31:38 -- host/digest.sh@83 -- # waitforlisten 54588 /var/tmp/bperf.sock 00:34:23.756 04:31:38 -- common/autotest_common.sh@819 -- # '[' -z 54588 ']' 00:34:23.756 04:31:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:23.756 04:31:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:23.756 04:31:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:23.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:23.756 04:31:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:23.756 04:31:38 -- common/autotest_common.sh@10 -- # set +x 00:34:23.756 04:31:38 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:24.015 [2024-05-14 04:31:38.382737] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:24.015 [2024-05-14 04:31:38.382850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54588 ] 00:34:24.015 EAL: No free 2048 kB hugepages reported on node 1 00:34:24.015 [2024-05-14 04:31:38.494039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.015 [2024-05-14 04:31:38.582404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:24.586 04:31:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:24.586 04:31:39 -- common/autotest_common.sh@852 -- # return 0 00:34:24.586 04:31:39 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:34:24.586 04:31:39 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:34:24.586 04:31:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:34:24.847 [2024-05-14 04:31:39.230917] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:34:24.847 04:31:39 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:24.847 04:31:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:31.411 04:31:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.411 04:31:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.411 nvme0n1 00:34:31.411 04:31:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:31.411 04:31:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:31.411 Running I/O for 2 seconds... 
00:34:33.375 00:34:33.375 Latency(us) 00:34:33.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.375 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:33.375 nvme0n1 : 2.00 27834.96 108.73 0.00 0.00 4588.54 2129.92 7174.47 00:34:33.375 =================================================================================================================== 00:34:33.375 Total : 27834.96 108.73 0.00 0.00 4588.54 2129.92 7174.47 00:34:33.375 0 00:34:33.375 04:31:47 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:34:33.375 04:31:47 -- host/digest.sh@92 -- # get_accel_stats 00:34:33.375 04:31:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:33.375 04:31:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:33.375 | select(.opcode=="crc32c") 00:34:33.375 | "\(.module_name) \(.executed)"' 00:34:33.375 04:31:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:33.635 04:31:47 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:34:33.635 04:31:47 -- host/digest.sh@93 -- # exp_module=dsa 00:34:33.635 04:31:47 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:34:33.635 04:31:47 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:34:33.635 04:31:47 -- host/digest.sh@97 -- # killprocess 54588 00:34:33.635 04:31:47 -- common/autotest_common.sh@926 -- # '[' -z 54588 ']' 00:34:33.635 04:31:47 -- common/autotest_common.sh@930 -- # kill -0 54588 00:34:33.635 04:31:47 -- common/autotest_common.sh@931 -- # uname 00:34:33.635 04:31:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:33.635 04:31:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54588 00:34:33.635 04:31:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:33.635 04:31:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:33.635 04:31:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54588' 00:34:33.635 killing process with pid 54588 00:34:33.635 04:31:48 -- common/autotest_common.sh@945 -- # kill 54588 00:34:33.635 Received shutdown signal, test time was about 2.000000 seconds 00:34:33.635 00:34:33.635 Latency(us) 00:34:33.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.635 =================================================================================================================== 00:34:33.635 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:33.635 04:31:48 -- common/autotest_common.sh@950 -- # wait 54588 00:34:35.536 04:31:49 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:34:35.536 04:31:49 -- host/digest.sh@77 -- # local rw bs qd 00:34:35.536 04:31:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:35.536 04:31:49 -- host/digest.sh@80 -- # rw=randwrite 00:34:35.536 04:31:49 -- host/digest.sh@80 -- # bs=131072 00:34:35.536 04:31:49 -- host/digest.sh@80 -- # qd=16 00:34:35.536 04:31:49 -- host/digest.sh@82 -- # bperfpid=56802 00:34:35.536 04:31:49 -- host/digest.sh@83 -- # waitforlisten 56802 /var/tmp/bperf.sock 00:34:35.536 04:31:49 -- common/autotest_common.sh@819 -- # '[' -z 56802 ']' 00:34:35.536 04:31:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:35.536 04:31:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:35.536 04:31:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:35.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:35.536 04:31:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:35.536 04:31:49 -- common/autotest_common.sh@10 -- # set +x 00:34:35.536 04:31:49 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:35.536 [2024-05-14 04:31:50.022819] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:35.536 [2024-05-14 04:31:50.022943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56802 ] 00:34:35.536 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:35.536 Zero copy mechanism will not be used. 00:34:35.536 EAL: No free 2048 kB hugepages reported on node 1 00:34:35.797 [2024-05-14 04:31:50.151087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.797 [2024-05-14 04:31:50.241421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.367 04:31:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:36.367 04:31:50 -- common/autotest_common.sh@852 -- # return 0 00:34:36.367 04:31:50 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:34:36.367 04:31:50 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:34:36.367 04:31:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:34:36.367 [2024-05-14 04:31:50.857970] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:34:36.367 04:31:50 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:34:36.367 04:31:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:42.937 04:31:57 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.937 04:31:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.937 nvme0n1 00:34:42.937 04:31:57 -- host/digest.sh@91 -- # bperf_py perform_tests 00:34:42.937 04:31:57 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:42.937 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:42.937 Zero copy mechanism will not be used. 00:34:42.937 Running I/O for 2 seconds... 
00:34:45.470 00:34:45.470 Latency(us) 00:34:45.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.470 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:45.470 nvme0n1 : 2.00 7371.91 921.49 0.00 0.00 2166.79 1534.92 9588.95 00:34:45.470 =================================================================================================================== 00:34:45.470 Total : 7371.91 921.49 0.00 0.00 2166.79 1534.92 9588.95 00:34:45.470 0 00:34:45.470 04:31:59 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:34:45.470 04:31:59 -- host/digest.sh@92 -- # get_accel_stats 00:34:45.470 04:31:59 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:45.470 04:31:59 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:45.470 | select(.opcode=="crc32c") 00:34:45.470 | "\(.module_name) \(.executed)"' 00:34:45.470 04:31:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:45.470 04:31:59 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:34:45.470 04:31:59 -- host/digest.sh@93 -- # exp_module=dsa 00:34:45.470 04:31:59 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:34:45.470 04:31:59 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:34:45.470 04:31:59 -- host/digest.sh@97 -- # killprocess 56802 00:34:45.470 04:31:59 -- common/autotest_common.sh@926 -- # '[' -z 56802 ']' 00:34:45.470 04:31:59 -- common/autotest_common.sh@930 -- # kill -0 56802 00:34:45.470 04:31:59 -- common/autotest_common.sh@931 -- # uname 00:34:45.471 04:31:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:45.471 04:31:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56802 00:34:45.471 04:31:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:45.471 04:31:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:45.471 04:31:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56802' 00:34:45.471 killing process with pid 56802 00:34:45.471 04:31:59 -- common/autotest_common.sh@945 -- # kill 56802 00:34:45.471 Received shutdown signal, test time was about 2.000000 seconds 00:34:45.471 00:34:45.471 Latency(us) 00:34:45.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.471 =================================================================================================================== 00:34:45.471 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:45.471 04:31:59 -- common/autotest_common.sh@950 -- # wait 56802 00:34:47.372 04:32:01 -- host/digest.sh@126 -- # killprocess 49934 00:34:47.372 04:32:01 -- common/autotest_common.sh@926 -- # '[' -z 49934 ']' 00:34:47.372 04:32:01 -- common/autotest_common.sh@930 -- # kill -0 49934 00:34:47.372 04:32:01 -- common/autotest_common.sh@931 -- # uname 00:34:47.372 04:32:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:47.372 04:32:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 49934 00:34:47.372 04:32:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:47.372 04:32:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:47.372 04:32:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49934' 00:34:47.372 killing process with pid 49934 00:34:47.372 04:32:01 -- common/autotest_common.sh@945 -- # kill 49934 00:34:47.372 04:32:01 -- common/autotest_common.sh@950 -- # wait 49934 00:34:47.631 00:34:47.631 real 0m48.066s 00:34:47.631 user 
1m8.355s 00:34:47.631 sys 0m3.870s 00:34:47.631 04:32:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:47.631 04:32:02 -- common/autotest_common.sh@10 -- # set +x 00:34:47.631 ************************************ 00:34:47.631 END TEST nvmf_digest_clean 00:34:47.631 ************************************ 00:34:47.631 04:32:02 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:34:47.631 04:32:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:47.631 04:32:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:47.631 04:32:02 -- common/autotest_common.sh@10 -- # set +x 00:34:47.631 ************************************ 00:34:47.631 START TEST nvmf_digest_error 00:34:47.631 ************************************ 00:34:47.631 04:32:02 -- common/autotest_common.sh@1104 -- # run_digest_error 00:34:47.631 04:32:02 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:34:47.631 04:32:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:47.631 04:32:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:47.631 04:32:02 -- common/autotest_common.sh@10 -- # set +x 00:34:47.631 04:32:02 -- nvmf/common.sh@469 -- # nvmfpid=59167 00:34:47.631 04:32:02 -- nvmf/common.sh@470 -- # waitforlisten 59167 00:34:47.631 04:32:02 -- common/autotest_common.sh@819 -- # '[' -z 59167 ']' 00:34:47.631 04:32:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.631 04:32:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:47.631 04:32:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.631 04:32:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:47.631 04:32:02 -- common/autotest_common.sh@10 -- # set +x 00:34:47.631 04:32:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:47.890 [2024-05-14 04:32:02.251365] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:47.890 [2024-05-14 04:32:02.251447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.890 EAL: No free 2048 kB hugepages reported on node 1 00:34:47.890 [2024-05-14 04:32:02.342761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.890 [2024-05-14 04:32:02.432005] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:47.890 [2024-05-14 04:32:02.432162] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.890 [2024-05-14 04:32:02.432174] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.890 [2024-05-14 04:32:02.432183] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:47.890 [2024-05-14 04:32:02.432213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.454 04:32:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:48.454 04:32:02 -- common/autotest_common.sh@852 -- # return 0 00:34:48.454 04:32:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:48.454 04:32:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:48.454 04:32:02 -- common/autotest_common.sh@10 -- # set +x 00:34:48.454 04:32:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.454 04:32:02 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:48.454 04:32:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:48.454 04:32:02 -- common/autotest_common.sh@10 -- # set +x 00:34:48.454 [2024-05-14 04:32:02.996664] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:48.454 04:32:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:48.454 04:32:02 -- host/digest.sh@104 -- # common_target_config 00:34:48.454 04:32:03 -- host/digest.sh@43 -- # rpc_cmd 00:34:48.454 04:32:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:48.454 04:32:03 -- common/autotest_common.sh@10 -- # set +x 00:34:48.711 null0 00:34:48.711 [2024-05-14 04:32:03.150559] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.711 [2024-05-14 04:32:03.174691] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.711 04:32:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:48.711 04:32:03 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:34:48.711 04:32:03 -- host/digest.sh@54 -- # local rw bs qd 00:34:48.711 04:32:03 -- host/digest.sh@56 -- # rw=randread 00:34:48.711 04:32:03 -- host/digest.sh@56 -- # bs=4096 00:34:48.711 04:32:03 -- host/digest.sh@56 -- # qd=128 00:34:48.711 04:32:03 -- host/digest.sh@58 -- # bperfpid=59479 00:34:48.711 04:32:03 -- host/digest.sh@60 -- # waitforlisten 59479 /var/tmp/bperf.sock 00:34:48.711 04:32:03 -- common/autotest_common.sh@819 -- # '[' -z 59479 ']' 00:34:48.711 04:32:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:48.711 04:32:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:48.711 04:32:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:48.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:48.711 04:32:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:48.711 04:32:03 -- common/autotest_common.sh@10 -- # set +x 00:34:48.711 04:32:03 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:48.711 [2024-05-14 04:32:03.246129] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:34:48.711 [2024-05-14 04:32:03.246237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59479 ] 00:34:48.970 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.970 [2024-05-14 04:32:03.356319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.970 [2024-05-14 04:32:03.444318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.540 04:32:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:49.540 04:32:03 -- common/autotest_common.sh@852 -- # return 0 00:34:49.540 04:32:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:49.540 04:32:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:49.540 04:32:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:49.540 04:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:49.540 04:32:04 -- common/autotest_common.sh@10 -- # set +x 00:34:49.540 04:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:49.540 04:32:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:49.540 04:32:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:50.105 nvme0n1 00:34:50.105 04:32:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:50.105 04:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.105 04:32:04 -- common/autotest_common.sh@10 -- # set +x 00:34:50.105 04:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.105 04:32:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:50.105 04:32:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:50.105 Running I/O for 2 seconds... 
00:34:50.105 [2024-05-14 04:32:04.579773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.105 [2024-05-14 04:32:04.579818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.105 [2024-05-14 04:32:04.579833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.105 [2024-05-14 04:32:04.592406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.105 [2024-05-14 04:32:04.592437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.105 [2024-05-14 04:32:04.592450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.105 [2024-05-14 04:32:04.604805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.105 [2024-05-14 04:32:04.604835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.105 [2024-05-14 04:32:04.604846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.105 [2024-05-14 04:32:04.612456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.105 [2024-05-14 04:32:04.612483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.105 [2024-05-14 04:32:04.612493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.105 [2024-05-14 04:32:04.624048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.105 [2024-05-14 04:32:04.624075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.105 [2024-05-14 04:32:04.624085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.105 [2024-05-14 04:32:04.636555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.105 [2024-05-14 04:32:04.636580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.105 [2024-05-14 04:32:04.636591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.105 [2024-05-14 04:32:04.649345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.105 [2024-05-14 04:32:04.649377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.105 [2024-05-14 04:32:04.649388] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.105 [2024-05-14 04:32:04.661099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.105 [2024-05-14 04:32:04.661127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.105 [2024-05-14 04:32:04.661136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.105 [2024-05-14 04:32:04.673687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.105 [2024-05-14 04:32:04.673714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.105 [2024-05-14 04:32:04.673724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.105 [2024-05-14 04:32:04.686109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.105 [2024-05-14 04:32:04.686135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.105 [2024-05-14 04:32:04.686150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.363 [2024-05-14 04:32:04.704132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.363 [2024-05-14 04:32:04.704159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.363 [2024-05-14 04:32:04.704168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.363 [2024-05-14 04:32:04.716543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.363 [2024-05-14 04:32:04.716572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.363 [2024-05-14 04:32:04.716583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.363 [2024-05-14 04:32:04.728858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.363 [2024-05-14 04:32:04.728883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.363 [2024-05-14 04:32:04.728893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.363 [2024-05-14 04:32:04.741423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.363 [2024-05-14 04:32:04.741448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:50.363 [2024-05-14 04:32:04.741458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.363 [2024-05-14 04:32:04.753787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.363 [2024-05-14 04:32:04.753814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.363 [2024-05-14 04:32:04.753824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.363 [2024-05-14 04:32:04.766250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.363 [2024-05-14 04:32:04.766276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.363 [2024-05-14 04:32:04.766286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.363 [2024-05-14 04:32:04.778480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.363 [2024-05-14 04:32:04.778515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.363 [2024-05-14 04:32:04.778525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.363 [2024-05-14 04:32:04.790847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.363 [2024-05-14 04:32:04.790871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.363 [2024-05-14 04:32:04.790881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.363 [2024-05-14 04:32:04.803029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.363 [2024-05-14 04:32:04.803053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.363 [2024-05-14 04:32:04.803063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.363 [2024-05-14 04:32:04.815178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.815209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.815220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.827455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.827479] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.827489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.839697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.839721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.839731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.851760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.851785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.851795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.864042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.864068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.864079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.876257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.876284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.876294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.888318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.888343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.888353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.900233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.900259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.900274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.912564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 
00:34:50.364 [2024-05-14 04:32:04.912594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.912605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.924783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.924808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.924818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.937030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.937057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.937067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.364 [2024-05-14 04:32:04.949320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.364 [2024-05-14 04:32:04.949348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.364 [2024-05-14 04:32:04.949359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.622 [2024-05-14 04:32:04.961752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.622 [2024-05-14 04:32:04.961779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.622 [2024-05-14 04:32:04.961790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.622 [2024-05-14 04:32:04.974155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.622 [2024-05-14 04:32:04.974180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.622 [2024-05-14 04:32:04.974195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.622 [2024-05-14 04:32:04.986422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.622 [2024-05-14 04:32:04.986447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.622 [2024-05-14 04:32:04.986457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.622 [2024-05-14 04:32:04.998798] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.622 [2024-05-14 04:32:04.998823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:04.998833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.011177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.011208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.011219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.023533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.023558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.023568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.035700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.035729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.035739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.047966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.047990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.048000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.060390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.060425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.060434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.072783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.072811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.072821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.085136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.085160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.085170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.097461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.097491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.097503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.109794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.109820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.109836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.121807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.121834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.121844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.135042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.135074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.135086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.148985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.149014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.149025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.156803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.156829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.156839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.167876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.167903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.167913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.179749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.179777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.179788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.190672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.190700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.190710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.623 [2024-05-14 04:32:05.199390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.623 [2024-05-14 04:32:05.199415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.623 [2024-05-14 04:32:05.199426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.210583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.210610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.882 [2024-05-14 04:32:05.210620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.222517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.222541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.882 [2024-05-14 04:32:05.222552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.233582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.233608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1339 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.882 [2024-05-14 04:32:05.233617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.242659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.242683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.882 [2024-05-14 04:32:05.242692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.254436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.254464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.882 [2024-05-14 04:32:05.254476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.268359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.268387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.882 [2024-05-14 04:32:05.268398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.280444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.280471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.882 [2024-05-14 04:32:05.280482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.292648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.292673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.882 [2024-05-14 04:32:05.292683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.304656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.304682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.882 [2024-05-14 04:32:05.304697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.316956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.316984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.882 [2024-05-14 04:32:05.316995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.882 [2024-05-14 04:32:05.324666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.882 [2024-05-14 04:32:05.324691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.324701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.335977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.336003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.336013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.348313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.348340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.348350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.360287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.360316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.360328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.372281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.372308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.372319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.384549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.384575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.384585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.396892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.396918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.396929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.409128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.409159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.409169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.421372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.421399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.421409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.433487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.433518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.433529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.445777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.445804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.445814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.883 [2024-05-14 04:32:05.457975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:50.883 [2024-05-14 04:32:05.458000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.883 [2024-05-14 04:32:05.458010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.143 [2024-05-14 04:32:05.470132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.143 [2024-05-14 04:32:05.470160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.143 [2024-05-14 04:32:05.470170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.143 [2024-05-14 04:32:05.482696] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.143 [2024-05-14 04:32:05.482724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.143 [2024-05-14 04:32:05.482734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.143 [2024-05-14 04:32:05.494922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.494946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.494956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.507161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.507190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.507205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.519364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.519388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.519398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.531649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.531675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.531686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.543911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.543938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.543948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.556166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.556196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.556206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.568647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.568677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.568689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.580881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.580907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.580917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.592962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.592988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.592998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.604878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.604905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.604915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.619697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.619729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.619740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.631834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.631860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.631871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.643895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.643921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.643933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.655117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.655141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.655151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.663577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.663601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.663611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.675668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.675698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.675709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.692617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.692644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.692654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.704940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.704966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.704975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.717246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.717272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.717288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.144 [2024-05-14 04:32:05.729581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.144 [2024-05-14 04:32:05.729612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17657 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.144 [2024-05-14 04:32:05.729623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.406 [2024-05-14 04:32:05.741742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.406 [2024-05-14 04:32:05.741769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.406 [2024-05-14 04:32:05.741779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.406 [2024-05-14 04:32:05.749580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.406 [2024-05-14 04:32:05.749605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.406 [2024-05-14 04:32:05.749615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.406 [2024-05-14 04:32:05.760935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.406 [2024-05-14 04:32:05.760962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.406 [2024-05-14 04:32:05.760974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.406 [2024-05-14 04:32:05.773148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.406 [2024-05-14 04:32:05.773175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.406 [2024-05-14 04:32:05.773188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.406 [2024-05-14 04:32:05.785559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.406 [2024-05-14 04:32:05.785584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.406 [2024-05-14 04:32:05.785595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.406 [2024-05-14 04:32:05.797861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.797887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.797905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.810174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.810208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.810220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.822552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.822585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.822595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.834847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.834874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.834885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.847217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.847242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.847252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.859416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.859441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.859451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.871969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.872000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.872012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.883968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.883995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.884006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.896086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.896112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.896122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.908193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.908219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.908229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.920735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.920762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.920776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.933171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.933206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.933217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.945429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.945457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.945467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.957780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.957807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.957817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 04:32:05.969927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.969955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.969966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.407 [2024-05-14 
04:32:05.982325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.407 [2024-05-14 04:32:05.982352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.407 [2024-05-14 04:32:05.982363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:05.994630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:05.994658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:05.994669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.002599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.002623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.002633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.014224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.014253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.014265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.026880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.026911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.026922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.043913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.043941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.043951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.051871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.051897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.051907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.063304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.063330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.063340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.076024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.076054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.076064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.088326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.088353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.088363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.100679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.100704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.100714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.112900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.112927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.112937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.125224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.125248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.125258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.137366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.137390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 
04:32:06.137400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.149833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.149858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.149868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.162377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.162403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.162413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.174521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.174549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.174559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.187223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.667 [2024-05-14 04:32:06.187248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.667 [2024-05-14 04:32:06.187259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.667 [2024-05-14 04:32:06.199687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.668 [2024-05-14 04:32:06.199714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.668 [2024-05-14 04:32:06.199725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.668 [2024-05-14 04:32:06.211926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.668 [2024-05-14 04:32:06.211951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.668 [2024-05-14 04:32:06.211961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.668 [2024-05-14 04:32:06.224533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.668 [2024-05-14 04:32:06.224558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:62 nsid:1 lba:15324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.668 [2024-05-14 04:32:06.224569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.668 [2024-05-14 04:32:06.236651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.668 [2024-05-14 04:32:06.236680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.668 [2024-05-14 04:32:06.236690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.668 [2024-05-14 04:32:06.248991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.668 [2024-05-14 04:32:06.249017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.668 [2024-05-14 04:32:06.249027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.261218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.261244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.261254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.273560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.273585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.273596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.285723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.285748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.285757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.298005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.298030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.298040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.310291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 
[2024-05-14 04:32:06.310316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.310326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.322526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.322558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.322568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.334781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.334806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.334816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.346870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.346895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.346905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.359274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.359299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.359310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.371488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.371517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.371528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.383686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.383711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.383721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.396057] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.396083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.396093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.408137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.408162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.408172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.420359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.420384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.420394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.432755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.928 [2024-05-14 04:32:06.432781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.928 [2024-05-14 04:32:06.432791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.928 [2024-05-14 04:32:06.445133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.929 [2024-05-14 04:32:06.445163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.929 [2024-05-14 04:32:06.445173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.929 [2024-05-14 04:32:06.457175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.929 [2024-05-14 04:32:06.457206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.929 [2024-05-14 04:32:06.457217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.929 [2024-05-14 04:32:06.469155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.929 [2024-05-14 04:32:06.469181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.929 [2024-05-14 04:32:06.469194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.929 [2024-05-14 04:32:06.482010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.929 [2024-05-14 04:32:06.482035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.929 [2024-05-14 04:32:06.482045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.929 [2024-05-14 04:32:06.494266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.929 [2024-05-14 04:32:06.494292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.929 [2024-05-14 04:32:06.494301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.929 [2024-05-14 04:32:06.506587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:51.929 [2024-05-14 04:32:06.506614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.929 [2024-05-14 04:32:06.506624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.188 [2024-05-14 04:32:06.519124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:52.188 [2024-05-14 04:32:06.519151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.188 [2024-05-14 04:32:06.519160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.188 [2024-05-14 04:32:06.531756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:52.188 [2024-05-14 04:32:06.531781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.188 [2024-05-14 04:32:06.531791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.188 [2024-05-14 04:32:06.544276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:52.188 [2024-05-14 04:32:06.544301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.188 [2024-05-14 04:32:06.544311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.188 [2024-05-14 04:32:06.556194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:52.188 [2024-05-14 04:32:06.556221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.188 [2024-05-14 04:32:06.556230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:52.188 00:34:52.188 Latency(us) 00:34:52.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.188 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:52.188 nvme0n1 : 2.00 20911.60 81.69 0.00 0.00 6115.99 2500.72 24144.84 00:34:52.188 =================================================================================================================== 00:34:52.188 Total : 20911.60 81.69 0.00 0.00 6115.99 2500.72 24144.84 00:34:52.189 0 00:34:52.189 04:32:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:52.189 04:32:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:52.189 04:32:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:52.189 | .driver_specific 00:34:52.189 | .nvme_error 00:34:52.189 | .status_code 00:34:52.189 | .command_transient_transport_error' 00:34:52.189 04:32:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:52.189 04:32:06 -- host/digest.sh@71 -- # (( 164 > 0 )) 00:34:52.189 04:32:06 -- host/digest.sh@73 -- # killprocess 59479 00:34:52.189 04:32:06 -- common/autotest_common.sh@926 -- # '[' -z 59479 ']' 00:34:52.189 04:32:06 -- common/autotest_common.sh@930 -- # kill -0 59479 00:34:52.189 04:32:06 -- common/autotest_common.sh@931 -- # uname 00:34:52.189 04:32:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:52.189 04:32:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59479 00:34:52.189 04:32:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:52.189 04:32:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:52.189 04:32:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59479' 00:34:52.189 killing process with pid 59479 00:34:52.189 04:32:06 -- common/autotest_common.sh@945 -- # kill 59479 00:34:52.189 Received shutdown signal, test time was about 2.000000 seconds 00:34:52.189 00:34:52.189 Latency(us) 00:34:52.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.189 =================================================================================================================== 00:34:52.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:52.189 04:32:06 -- common/autotest_common.sh@950 -- # wait 59479 00:34:52.754 04:32:07 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:34:52.754 04:32:07 -- host/digest.sh@54 -- # local rw bs qd 00:34:52.754 04:32:07 -- host/digest.sh@56 -- # rw=randread 00:34:52.754 04:32:07 -- host/digest.sh@56 -- # bs=131072 00:34:52.754 04:32:07 -- host/digest.sh@56 -- # qd=16 00:34:52.754 04:32:07 -- host/digest.sh@58 -- # bperfpid=60119 00:34:52.754 04:32:07 -- host/digest.sh@60 -- # waitforlisten 60119 /var/tmp/bperf.sock 00:34:52.754 04:32:07 -- common/autotest_common.sh@819 -- # '[' -z 60119 ']' 00:34:52.754 04:32:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:52.754 04:32:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:52.754 04:32:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:52.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:52.754 04:32:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:52.754 04:32:07 -- common/autotest_common.sh@10 -- # set +x 00:34:52.754 04:32:07 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:52.754 [2024-05-14 04:32:07.196934] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:52.754 [2024-05-14 04:32:07.197045] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60119 ] 00:34:52.754 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:52.754 Zero copy mechanism will not be used. 00:34:52.754 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.754 [2024-05-14 04:32:07.306366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.050 [2024-05-14 04:32:07.396465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.308 04:32:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:53.308 04:32:07 -- common/autotest_common.sh@852 -- # return 0 00:34:53.308 04:32:07 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:53.308 04:32:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:53.568 04:32:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:53.568 04:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:53.568 04:32:08 -- common/autotest_common.sh@10 -- # set +x 00:34:53.568 04:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:53.568 04:32:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:53.568 04:32:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:53.829 nvme0n1 00:34:53.829 04:32:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:53.829 04:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:53.829 04:32:08 -- common/autotest_common.sh@10 -- # set +x 00:34:53.829 04:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:53.829 04:32:08 -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:53.829 04:32:08 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:54.090 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:54.090 Zero copy mechanism will not be used. 00:34:54.090 Running I/O for 2 seconds... 
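(For readability, the following is a condensed sketch of the sequence digest.sh just traced for this run, assembled only from the commands visible in the xtrace above; the socket path, target address, subsystem NQN, and bdev names are the values shown in this log, not general defaults, and rpc_cmd in the trace sends the accel injection to the default RPC socket rather than bperf.sock.)

# Start bdevperf in wait-for-RPC mode on a private socket, as the trace does.
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# Keep per-command NVMe error statistics and retry failed I/O indefinitely.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest enabled, then corrupt every 32nd
# crc32c accel operation so reads hit transient data-digest errors.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the 2-second workload, then read back the counter that
# get_transient_errcount checks with (( count > 0 )).
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'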
00:34:54.091 [2024-05-14 04:32:08.455606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.455651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.455666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.462853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.462882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.462894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.469749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.469780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.469791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.476579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.476603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.476620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.483540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.483564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.483575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.490351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.490376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.490386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.497109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.497132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.497142] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.503904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.503929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.503938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.510680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.510705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.510715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.517512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.517535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.517544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.524317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.524339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.524349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.531167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.531193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.531203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.537928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.537953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.537963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.544741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.544764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:54.091 [2024-05-14 04:32:08.544774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.551589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.551612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.551622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.558556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.558580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.558590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.565355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.565386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.565396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.572118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.572143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.572154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.578880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.578904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.578914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.585648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.585671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.585681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.592372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.592396] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.592412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.599042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.599065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.599076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.605750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.605775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.605785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.612595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.612618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.612628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.619275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.619299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.619311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.626026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.626050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.626060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.632666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.091 [2024-05-14 04:32:08.632689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.091 [2024-05-14 04:32:08.632699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.091 [2024-05-14 04:32:08.639333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 
00:34:54.092 [2024-05-14 04:32:08.639358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.092 [2024-05-14 04:32:08.639367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.092 [2024-05-14 04:32:08.645980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.092 [2024-05-14 04:32:08.646004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.092 [2024-05-14 04:32:08.646014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.092 [2024-05-14 04:32:08.652686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.092 [2024-05-14 04:32:08.652710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.092 [2024-05-14 04:32:08.652720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.092 [2024-05-14 04:32:08.659408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.092 [2024-05-14 04:32:08.659434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.092 [2024-05-14 04:32:08.659444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.092 [2024-05-14 04:32:08.666156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.092 [2024-05-14 04:32:08.666181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.092 [2024-05-14 04:32:08.666195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.092 [2024-05-14 04:32:08.672917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.092 [2024-05-14 04:32:08.672943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.092 [2024-05-14 04:32:08.672954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.679593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.679619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.679629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.686341] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.686363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.686373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.693108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.693131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.693141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.699851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.699874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.699883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.706538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.706562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.706577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.713282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.713308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.713319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.720364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.720393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.720405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.727427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.727454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.727466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.734256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.734279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.734289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.741000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.741023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.741035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.747775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.747797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.353 [2024-05-14 04:32:08.747807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.353 [2024-05-14 04:32:08.755294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.353 [2024-05-14 04:32:08.755318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.755328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.762747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.762771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.762781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.771297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.771321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.771331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.779973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.779997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 
04:32:08.780007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.788691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.788715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.788725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.797202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.797227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.797237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.805773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.805797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.805807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.814535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.814559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.814569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.823081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.823105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.823115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.831917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.831942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.831951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.838872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.838895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.838912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.844498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.844522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.844532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.851062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.851084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.851094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.858027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.858051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.858061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.862153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.862183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.862199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.867032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.867059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.867069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.871372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.871397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.871408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.874460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 
[2024-05-14 04:32:08.874483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.874492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.879462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.879487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.879498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.883419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.883449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.883461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.887646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.887671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.887682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.891383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.891408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.891418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.895713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.895737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.895749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.899728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.899753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.899763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.904427] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.904451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.904462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.909313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.909339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.909349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.914372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.914401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.914413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.919371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.354 [2024-05-14 04:32:08.919399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.354 [2024-05-14 04:32:08.919415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.354 [2024-05-14 04:32:08.924399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.355 [2024-05-14 04:32:08.924424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.355 [2024-05-14 04:32:08.924434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.355 [2024-05-14 04:32:08.929605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.355 [2024-05-14 04:32:08.929630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.355 [2024-05-14 04:32:08.929641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.355 [2024-05-14 04:32:08.934789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.355 [2024-05-14 04:32:08.934814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.355 [2024-05-14 04:32:08.934823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.939795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.939822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.939833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.944663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.944688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.944698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.949531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.949555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.949565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.954618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.954644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.954654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.959718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.959744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.959754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.964837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.964862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.964872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.969795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.969821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.969831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.974791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.974817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.974827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.980146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.980171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.980181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.986183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.986215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.986224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.991927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.991953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.991964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:08.998391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:08.998416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:08.998426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:09.004816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:09.004841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:09.004851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:09.011920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:09.011947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:09.011961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:09.019708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:09.019732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:09.019742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:09.026541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:09.026567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:09.026578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:09.033386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.615 [2024-05-14 04:32:09.033411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.615 [2024-05-14 04:32:09.033421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.615 [2024-05-14 04:32:09.040253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.040278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.040288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.046238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.046263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.046273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.052737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.052763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.052772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.060448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.060475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.060484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.067345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.067370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.067380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.075144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.075176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.075193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.080560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.080584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.080595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.086198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.086222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.086232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.091939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.091963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.091973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.098305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.098331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.098350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.104115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.104139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.104148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.109972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.109998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.110008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.115859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.115882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.115892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.121829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.121854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.121868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.128782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.128808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.128818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.136140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.136168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.136180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.144553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.144580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.144591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.151541] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.151569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.151579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.157291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.157316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.157326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.163362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.163386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.163396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.168906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.168931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.168941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.174970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.174998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.175009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.179878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.179911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.179921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.184671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.184696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.184706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.189593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.189618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.189627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.194663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.194687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.194697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.616 [2024-05-14 04:32:09.199668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.616 [2024-05-14 04:32:09.199693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.616 [2024-05-14 04:32:09.199702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.876 [2024-05-14 04:32:09.204442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.876 [2024-05-14 04:32:09.204467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.876 [2024-05-14 04:32:09.204477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.876 [2024-05-14 04:32:09.209336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.876 [2024-05-14 04:32:09.209360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.876 [2024-05-14 04:32:09.209370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.876 [2024-05-14 04:32:09.214166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.876 [2024-05-14 04:32:09.214196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.876 [2024-05-14 04:32:09.214207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.876 [2024-05-14 04:32:09.219030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.876 [2024-05-14 04:32:09.219054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.876 [2024-05-14 04:32:09.219064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.223993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.224017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.224028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.228825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.228849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.228859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.233716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.233741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.233750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.238741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.238764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.238773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.244180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.244208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.244217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.249444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.249468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.249477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.254559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.254583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.254593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.259863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.259890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.259901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.265010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.265038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.265048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.269919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.269947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.269958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.275065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.275090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.275100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.280190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.280214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.280224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.284925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.284948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.284958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.288859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.288884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.288893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.292760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.292786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.292796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.296967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.296992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.297002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.301692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.301718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.301728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.306614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.306639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.306648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.311677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.311700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.311710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.316593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.316615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.316624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.321636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.321661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.321671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.326473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.326506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.326515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.331535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.331559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.331569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.336126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.336150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.336160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.340442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.340467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.340477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.344705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.344734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.877 [2024-05-14 04:32:09.344744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.877 [2024-05-14 04:32:09.349603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.877 [2024-05-14 04:32:09.349629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.349638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.354614] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.354638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.354647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.359326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.359349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.359359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.364135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.364159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.364168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.369039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.369063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.369073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.373909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.373934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.373944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.379060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.379083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.379093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.384219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.384244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.384254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.389257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.389281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.389291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.394139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.394162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.394171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.398768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.398793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.398803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.402853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.402879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.402890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.406719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.406744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.406754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.410279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.410305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.410315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.413898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.413922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.413932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.416343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.416365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.416374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.420487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.420510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.420525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.425282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.425305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.425314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.430168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.430195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.430205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.435023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.435045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.435055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.440087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.440114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.440124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.444863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.444886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.444895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.449827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.449850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.449860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.454676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.454698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.454708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:54.878 [2024-05-14 04:32:09.459768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:54.878 [2024-05-14 04:32:09.459791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.878 [2024-05-14 04:32:09.459801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.138 [2024-05-14 04:32:09.464721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.138 [2024-05-14 04:32:09.464745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.138 [2024-05-14 04:32:09.464755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.138 [2024-05-14 04:32:09.469564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.138 [2024-05-14 04:32:09.469587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.138 [2024-05-14 04:32:09.469597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.138 [2024-05-14 04:32:09.474656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.138 [2024-05-14 04:32:09.474684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.138 [2024-05-14 04:32:09.474696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.138 [2024-05-14 04:32:09.479241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.138 [2024-05-14 04:32:09.479268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.138 [2024-05-14 04:32:09.479279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.138 [2024-05-14 04:32:09.482936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.138 [2024-05-14 04:32:09.482960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.138 [2024-05-14 04:32:09.482970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.486715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.486739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.486749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.490479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.490507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.490520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.495033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.495059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.495069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.499811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.499835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.499850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.504525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.504550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.504560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.509274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.509298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.509308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.514137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.514160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.514169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.519223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.519247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.519257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.524086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.524109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.524119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.528870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.528897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.528908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.533695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.533720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.533730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.538554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.538578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.538587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.543543] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.543567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.543577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.548019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.548043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.548052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.552244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.552271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.552282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.556601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.556627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.556638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.561468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.561494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.561504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.566375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.566398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.566408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.571229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.571255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.571265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.576083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.576106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.576116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.580966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.580994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.581010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.585760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.585786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.585796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.590492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.590515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.590525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.595853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.595878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.595888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.601343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.601367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.601377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.606429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.606453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.606462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.611449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.611474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.139 [2024-05-14 04:32:09.611483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.139 [2024-05-14 04:32:09.616367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.139 [2024-05-14 04:32:09.616391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.616400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.621293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.621317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.621326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.626053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.626084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.626093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.630946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.630968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.630978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.634992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.635016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.635025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.639475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.639499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.639509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.644459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.644482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.644491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.648775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.648799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.648809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.653016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.653040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.653051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.657418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.657442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.657451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.662272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.662295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.662309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.667201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.667226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.667236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.672161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.672190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.672200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.677105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.677128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.677137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.681822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.681846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.681856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.686931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.686953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.686963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.691903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.691926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.691936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.696689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.696713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.696724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.701716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.701740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.701750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.707082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.707111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.707121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.712066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.712089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.712099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.717863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.717886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.717897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.140 [2024-05-14 04:32:09.723680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.140 [2024-05-14 04:32:09.723703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.140 [2024-05-14 04:32:09.723713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.729408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.729432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.729450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.736009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.736035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.736045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.742664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.742687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.742697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.749819] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.749842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.749852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.756697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.756720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.756729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.763295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.763319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.763329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.769271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.769293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.769302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.776105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.776129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.776139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.782857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.782880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.782890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.789182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.789213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.789222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.794931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.794954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.794964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.799509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.799532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.799542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.804289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.804315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.804326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.809730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.809761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.809771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.814799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.814823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.814833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.400 [2024-05-14 04:32:09.819823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.400 [2024-05-14 04:32:09.819846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.400 [2024-05-14 04:32:09.819857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.825211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.825234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.825244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.830087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.830111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.830121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.835131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.835154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.835164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.840765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.840792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.840806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.844465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.844489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.844499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.849928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.849951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.849961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.854995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.855016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.855026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.860387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.860411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.860421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.865847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.865870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.865880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.871326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.871350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.871360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.877038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.877061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.877071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.882459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.882482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.882491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.888481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.888503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.888513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.894164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.894191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.894201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.899868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.899895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.899905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.905640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.905663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.905672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.912660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.912683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.912693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.917343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.917366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.917376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.921857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.921882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.921891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.926815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.926837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.926847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.931699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.931722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.931732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.936604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.936626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.936636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.941678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.941702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.941712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.947108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.947131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.947140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.952487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.952510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.952520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.957422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.957444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.957453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.962380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.962403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.401 [2024-05-14 04:32:09.962413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.401 [2024-05-14 04:32:09.967260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.401 [2024-05-14 04:32:09.967290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.402 [2024-05-14 04:32:09.967301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.402 [2024-05-14 04:32:09.972427] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.402 [2024-05-14 04:32:09.972452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.402 [2024-05-14 04:32:09.972462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.402 [2024-05-14 04:32:09.977922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.402 [2024-05-14 04:32:09.977947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.402 [2024-05-14 04:32:09.977959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.402 [2024-05-14 04:32:09.983621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.402 [2024-05-14 04:32:09.983644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.402 [2024-05-14 04:32:09.983654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:09.988815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:09.988838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.662 [2024-05-14 04:32:09.988853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:09.993862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:09.993885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.662 [2024-05-14 04:32:09.993895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:09.999410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:09.999433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.662 [2024-05-14 04:32:09.999443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:10.006090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:10.006122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.662 [2024-05-14 04:32:10.006135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:10.012873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:10.012900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.662 [2024-05-14 04:32:10.012913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:10.019037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:10.019065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.662 [2024-05-14 04:32:10.019076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:10.025903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:10.025930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.662 [2024-05-14 04:32:10.025942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:10.032052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:10.032078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.662 [2024-05-14 04:32:10.032089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:10.037693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:10.037717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.662 [2024-05-14 04:32:10.037728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:10.043192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:10.043218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.662 [2024-05-14 04:32:10.043228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.662 [2024-05-14 04:32:10.049142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.662 [2024-05-14 04:32:10.049169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.049181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.054586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.054612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.054622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.059873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.059898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.059909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.064785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.064809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.064819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.069514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.069536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.069546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.074294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.074317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.074327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.079111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.079134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.079144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.084126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.084151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.084166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.089230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.089255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.089265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.094129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.094154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.094164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.099249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.099272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.099281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.104200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.104225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.104237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.110166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.110196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.110208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.115371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.115394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.115405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.120417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.120440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.120450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.125416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.125440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.125450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.130978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.131001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.131011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.136440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.136463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.136472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.141711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.141735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.141744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.146431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.146456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.146465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.151484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.151512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.151524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.156841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.156866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.156877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.162211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.162234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.162244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.167367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.167391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.167401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.172329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.172356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.172371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.177318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.177344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.177355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.182405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.182432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.182442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.663 [2024-05-14 04:32:10.186867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.663 [2024-05-14 04:32:10.186892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.663 [2024-05-14 04:32:10.186901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.191044] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.191070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.191080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.196147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.196173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.196183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.201613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.201639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.201649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.206942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.206967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.206976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.212228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.212252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.212261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.217478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.217504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.217514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.222891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.222915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.222925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.228334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.228358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.228368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.233586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.233611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.233620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.238966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.238990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.239000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.664 [2024-05-14 04:32:10.244438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.664 [2024-05-14 04:32:10.244462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.664 [2024-05-14 04:32:10.244473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.249876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.249902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.249911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.255148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.255173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.255183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.260502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.260533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.260551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.266050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.266077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.266088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.271293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.271320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.271331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.276564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.276590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.276600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.280703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.280727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.280737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.286058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.286082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.286092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.291307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.291331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.291341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.296695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.296718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.296728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.301999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.302023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.302033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.307154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.307182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.307196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.312322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.312346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.312357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.317853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.317877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.317887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.322367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.322395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.322407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.326723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.326752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.925 [2024-05-14 04:32:10.326766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.925 [2024-05-14 04:32:10.331479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.925 [2024-05-14 04:32:10.331508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.331521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.336501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.336525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.336535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.341343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.341367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.341377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.346014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.346039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.346054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.350830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.350854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.350864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.355784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.355808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.355817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.360730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.360754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.360763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.365599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.365623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.365633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.370104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.370127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.370137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.374989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.375015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.375026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.380012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.380039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.380049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.384796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.384822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.384831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.389704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.389732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.389744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.394515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.394540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.394552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.399291] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.399315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.399325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.404082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.404107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.404117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.408891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.408916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.408934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.413857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.413881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.413892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.418757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.418781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.418791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.423720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.423744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.423754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.428657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.428681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.428696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.433458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.433483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.433493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.438542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.438568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.438578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:55.926 [2024-05-14 04:32:10.443516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:34:55.926 [2024-05-14 04:32:10.443541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.926 [2024-05-14 04:32:10.443551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:55.926 00:34:55.926 Latency(us) 00:34:55.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.926 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:55.926 nvme0n1 : 2.00 5653.35 706.67 0.00 0.00 2827.19 560.51 10623.73 00:34:55.926 =================================================================================================================== 00:34:55.926 Total : 5653.35 706.67 0.00 0.00 2827.19 560.51 10623.73 00:34:55.926 0 00:34:55.926 04:32:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:55.926 04:32:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:55.926 04:32:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:55.926 | .driver_specific 00:34:55.926 | .nvme_error 00:34:55.926 | .status_code 00:34:55.926 | .command_transient_transport_error' 00:34:55.926 04:32:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:56.187 04:32:10 -- host/digest.sh@71 -- # (( 365 > 0 )) 00:34:56.187 04:32:10 -- host/digest.sh@73 -- # killprocess 60119 00:34:56.188 04:32:10 -- common/autotest_common.sh@926 -- # '[' -z 60119 ']' 00:34:56.188 04:32:10 -- common/autotest_common.sh@930 -- # kill -0 60119 00:34:56.188 04:32:10 -- common/autotest_common.sh@931 -- # uname 00:34:56.188 04:32:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:56.188 04:32:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60119 00:34:56.188 04:32:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:34:56.188 04:32:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:34:56.188 04:32:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60119' 00:34:56.188 killing process with pid 60119 00:34:56.188 04:32:10 -- common/autotest_common.sh@945 -- # kill 60119 00:34:56.188 Received shutdown 
signal, test time was about 2.000000 seconds 00:34:56.188 00:34:56.188 Latency(us) 00:34:56.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.188 =================================================================================================================== 00:34:56.188 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:56.188 04:32:10 -- common/autotest_common.sh@950 -- # wait 60119 00:34:56.448 04:32:11 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:34:56.449 04:32:11 -- host/digest.sh@54 -- # local rw bs qd 00:34:56.449 04:32:11 -- host/digest.sh@56 -- # rw=randwrite 00:34:56.449 04:32:11 -- host/digest.sh@56 -- # bs=4096 00:34:56.449 04:32:11 -- host/digest.sh@56 -- # qd=128 00:34:56.449 04:32:11 -- host/digest.sh@58 -- # bperfpid=61008 00:34:56.449 04:32:11 -- host/digest.sh@60 -- # waitforlisten 61008 /var/tmp/bperf.sock 00:34:56.449 04:32:11 -- common/autotest_common.sh@819 -- # '[' -z 61008 ']' 00:34:56.449 04:32:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:56.449 04:32:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:56.449 04:32:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:56.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:56.449 04:32:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:56.449 04:32:11 -- common/autotest_common.sh@10 -- # set +x 00:34:56.449 04:32:11 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:34:56.708 [2024-05-14 04:32:11.070194] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:34:56.708 [2024-05-14 04:32:11.070286] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61008 ] 00:34:56.708 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.708 [2024-05-14 04:32:11.159171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.708 [2024-05-14 04:32:11.249229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.275 04:32:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:57.275 04:32:11 -- common/autotest_common.sh@852 -- # return 0 00:34:57.275 04:32:11 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:57.275 04:32:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:57.533 04:32:11 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:57.533 04:32:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:57.533 04:32:11 -- common/autotest_common.sh@10 -- # set +x 00:34:57.533 04:32:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:57.533 04:32:11 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:57.533 04:32:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:57.791 nvme0n1 00:34:57.792 04:32:12 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:57.792 04:32:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:57.792 04:32:12 -- common/autotest_common.sh@10 -- # set +x 00:34:57.792 04:32:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:57.792 04:32:12 -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:57.792 04:32:12 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:57.792 Running I/O for 2 seconds... 
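For reference, the randwrite setup traced above (host/digest.sh@57 through @69, plus the error-count readback at @27-@28 earlier) reduces to the following minimal shell sketch. It is a reconstruction from this run's trace, not the test script itself: the rpc.py/bdevperf paths and the /var/tmp/bperf.sock socket are the ones used in this run, and the BPERF_RPC/TGT_RPC variable names are introduced here only for readability.

    #!/usr/bin/env bash
    # Sketch of the digest error-injection flow, assuming this run's workspace layout.
    SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
    BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # RPCs to the bdevperf app
    TGT_RPC="$SPDK/scripts/rpc.py"                            # default socket, as used by rpc_cmd in the trace

    # Launch bdevperf on its own RPC socket; -z makes it wait for a perform_tests RPC.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done    # the real run uses waitforlisten

    # Collect NVMe error statistics and retry failed I/O indefinitely.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled,
    # keeping crc32c error injection disabled while connecting.
    $TGT_RPC accel_error_inject_error -o crc32c -t disable
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt 256 crc32c operations so data digest checks fail, then run the workload.
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

    # Read back how many transient transport errors the injected digest failures produced.
    $BPERF_RPC bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Every command and flag in the sketch appears verbatim in the trace; the count printed by the last step is what the (( errcount > 0 )) check asserts on after the 2-second run below.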
00:34:57.792 [2024-05-14 04:32:12.222986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:34:57.792 [2024-05-14 04:32:12.223754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.223797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.231642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb480 00:34:57.792 [2024-05-14 04:32:12.232287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.232317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.240664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5658 00:34:57.792 [2024-05-14 04:32:12.241461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.241486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.249740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.250347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.250373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.258715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.259480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.259503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.267686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.268466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.268491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.276538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.277320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.277344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.285379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.286167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.286202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.294218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.295012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.295035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.303064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.303873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.303898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.311912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.312729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.312756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.320755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.321578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.321601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.329591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.330422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.330445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.338421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.339269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.339294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.347260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.348107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.348129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.356090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.356958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.356982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.364910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.365781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.365804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:57.792 [2024-05-14 04:32:12.373732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:57.792 [2024-05-14 04:32:12.374610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.792 [2024-05-14 04:32:12.374634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:58.052 [2024-05-14 04:32:12.382557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:58.052 [2024-05-14 04:32:12.383444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.052 [2024-05-14 04:32:12.383466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.391376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:58.053 [2024-05-14 04:32:12.392277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.392299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.400194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:58.053 [2024-05-14 04:32:12.401097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21011 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.401119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.409005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:58.053 [2024-05-14 04:32:12.409923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.409946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.417824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:34:58.053 [2024-05-14 04:32:12.418749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.418772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.426641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:34:58.053 [2024-05-14 04:32:12.427576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.427598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.435458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:34:58.053 [2024-05-14 04:32:12.436408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.436437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.444333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e84c0 00:34:58.053 [2024-05-14 04:32:12.445297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.445321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.453161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e84c0 00:34:58.053 [2024-05-14 04:32:12.454126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.454148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.461979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:34:58.053 [2024-05-14 04:32:12.462949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:81 nsid:1 lba:24311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.462975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.470317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:34:58.053 [2024-05-14 04:32:12.471085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.471108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.479317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0ff8 00:34:58.053 [2024-05-14 04:32:12.479896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.479921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.488242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.488989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.489011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.497031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.497781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.497804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.505827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.506586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.506608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.514625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.515394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.515420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.523412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.524191] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.524214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.532205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.532990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.533012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.541000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.541804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.541827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.549798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.550605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.550630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.558582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.559404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.559426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.567433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.568258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.568283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.576219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.577049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.577073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.584999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.585843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.585867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.593781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.594633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.594655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.602557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.603415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.603437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.611341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.612211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.612234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:58.053 [2024-05-14 04:32:12.620122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.053 [2024-05-14 04:32:12.621002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.053 [2024-05-14 04:32:12.621026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:58.054 [2024-05-14 04:32:12.628897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.054 [2024-05-14 04:32:12.629786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.054 [2024-05-14 04:32:12.629808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:58.054 [2024-05-14 04:32:12.637689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.054 [2024-05-14 04:32:12.638589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.054 [2024-05-14 04:32:12.638612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.646491] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:34:58.315 [2024-05-14 04:32:12.647402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.647426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.655312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa7d8 00:34:58.315 [2024-05-14 04:32:12.656235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.656256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.664106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f92c0 00:34:58.315 [2024-05-14 04:32:12.665033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.665055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.672911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:34:58.315 [2024-05-14 04:32:12.673847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.673870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.681713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:34:58.315 [2024-05-14 04:32:12.682661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.682684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.690511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f92c0 00:34:58.315 [2024-05-14 04:32:12.691478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.691500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.698812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195feb58 00:34:58.315 [2024-05-14 04:32:12.699565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.699586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:58.315 
[2024-05-14 04:32:12.707816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:34:58.315 [2024-05-14 04:32:12.708383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.708408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.716745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:34:58.315 [2024-05-14 04:32:12.717475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.717498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.725553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.315 [2024-05-14 04:32:12.726289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.726312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.734373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.315 [2024-05-14 04:32:12.735114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.735136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.743192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.315 [2024-05-14 04:32:12.743946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.743969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.751991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.315 [2024-05-14 04:32:12.752753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.752776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.760790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.315 [2024-05-14 04:32:12.761562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.761584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.315 [2024-05-14 04:32:12.769599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.315 [2024-05-14 04:32:12.770382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.315 [2024-05-14 04:32:12.770403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.778405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.779197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.779221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.787210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.788007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.788030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.795998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.796807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.796828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.804791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.805607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.805629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.813578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.814408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.814430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.822366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.823201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.823223] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.831166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.832014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.832036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.839958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.840812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.840838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.848760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.849627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.849650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.857586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.858464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.858485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.866382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.867266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.867288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.875177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.876070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.876096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.884005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:58.316 [2024-05-14 04:32:12.884908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 
04:32:12.884931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:58.316 [2024-05-14 04:32:12.892803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:34:58.316 [2024-05-14 04:32:12.893714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.316 [2024-05-14 04:32:12.893737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.901608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:34:58.578 [2024-05-14 04:32:12.902534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.902557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.910421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:34:58.578 [2024-05-14 04:32:12.911355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.911376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.919225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:34:58.578 [2024-05-14 04:32:12.920166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.920191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.927520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:34:58.578 [2024-05-14 04:32:12.928258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.928280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.936486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2510 00:34:58.578 [2024-05-14 04:32:12.937037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.937061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.945421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ff3c8 00:34:58.578 [2024-05-14 04:32:12.946132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19775 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.946156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.954228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:12.954944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.954968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.963034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:12.963762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.963785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.971847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:12.972591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.972613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.980653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:12.981400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.981422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.989448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:12.990206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.990235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:12.998256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:12.999016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:12.999046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.007063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.007840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.007862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.015906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.016694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.016716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.024710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.025507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.025531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.033518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.034437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.034462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.042334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.043147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.043169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.051126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.051949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.051971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.059918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.060746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.060769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.068748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.069598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.069620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.077575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.078426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.078448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.086382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.087238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.087260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.095190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.096053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.096077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.104006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.104884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.104908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:58.578 [2024-05-14 04:32:13.112803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.578 [2024-05-14 04:32:13.113688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.578 [2024-05-14 04:32:13.113710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:58.579 [2024-05-14 04:32:13.121607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.579 [2024-05-14 04:32:13.122514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.579 [2024-05-14 04:32:13.122536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:58.579 [2024-05-14 04:32:13.130611] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:34:58.579 [2024-05-14 04:32:13.131616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.579 [2024-05-14 04:32:13.131641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:58.579 [2024-05-14 04:32:13.141178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:34:58.579 [2024-05-14 04:32:13.142223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.579 [2024-05-14 04:32:13.142247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:58.579 [2024-05-14 04:32:13.150570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.579 [2024-05-14 04:32:13.151502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.579 [2024-05-14 04:32:13.151528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:58.579 [2024-05-14 04:32:13.158900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:58.579 [2024-05-14 04:32:13.159629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.579 [2024-05-14 04:32:13.159654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.167934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:34:58.840 [2024-05-14 04:32:13.168480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.168503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.176891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.177587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.177611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.185709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:34:58.840 [2024-05-14 04:32:13.186418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.186440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:58.840 
[2024-05-14 04:32:13.194550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:34:58.840 [2024-05-14 04:32:13.195267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.195293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.203406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.204128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.204152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.212247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.212978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.213000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.221071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.221812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.221840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.229899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.230647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.230670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.238724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.239493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.239515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.247564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.248352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.248375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.257644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.258617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.258644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.267880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.268668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.268691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.276742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.277544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.277569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.285563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.286385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.286409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.294388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.295190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.295214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.303204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.304011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.304035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.312007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.840 [2024-05-14 04:32:13.312823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.840 [2024-05-14 04:32:13.312846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:58.840 [2024-05-14 04:32:13.320820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.841 [2024-05-14 04:32:13.321646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.321668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.329630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.841 [2024-05-14 04:32:13.330470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.330493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.338459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.841 [2024-05-14 04:32:13.339308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.339330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.347286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.841 [2024-05-14 04:32:13.348138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.348160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.356095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.841 [2024-05-14 04:32:13.356968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.356992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.364942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.841 [2024-05-14 04:32:13.365820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.365843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.373795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:58.841 [2024-05-14 04:32:13.374681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
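Every completion printed in the stretch above carries the status "(00/22)" together with "p:0 m:0 dnr:0": Status Code Type 0x0 (generic command status), Status Code 0x22 (Transient Transport Error), with the phase, more, and do-not-retry bits clear, so the affected WRITE commands remain retryable. As a reading aid only, and not SPDK source, the minimal C sketch below decodes a 16-bit NVMe completion status field into those same fields; the raw value 0x0044 is a hypothetical encoding of SCT=0x0, SC=0x22 with P, M and DNR clear, matching the pattern in the log.

/* status_decode.c - illustrative sketch, not SPDK source.
 * Decodes the 16-bit NVMe completion status field that the log above
 * prints as "(SCT/SC) ... p:_ m:_ dnr:_".
 */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
	uint8_t p;    /* phase tag          (bit 0)     */
	uint8_t sc;   /* status code        (bits 8:1)  */
	uint8_t sct;  /* status code type   (bits 11:9) */
	uint8_t m;    /* more               (bit 14)    */
	uint8_t dnr;  /* do not retry       (bit 15)    */
};

static struct nvme_status decode_status(uint16_t raw)
{
	struct nvme_status s = {
		.p   = raw & 0x1,
		.sc  = (raw >> 1) & 0xff,
		.sct = (raw >> 9) & 0x7,
		.m   = (raw >> 14) & 0x1,
		.dnr = (raw >> 15) & 0x1,
	};
	return s;
}

int main(void)
{
	/* Hypothetical raw value encoding SCT=0x0, SC=0x22 with P, M and
	 * DNR clear, i.e. the "(00/22) ... p:0 m:0 dnr:0" pattern above. */
	struct nvme_status s = decode_status(0x0044);

	printf("sct:%02x sc:%02x p:%u m:%u dnr:%u -> %s\n",
	       s.sct, s.sc, s.p, s.m, s.dnr,
	       (s.sct == 0x0 && s.sc == 0x22) ? "transient transport error"
	                                      : "other status");
	return 0;
}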
00:34:58.841 [2024-05-14 04:32:13.374708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.382652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:34:58.841 [2024-05-14 04:32:13.383547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.383571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.390995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:34:58.841 [2024-05-14 04:32:13.391700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.391722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.400001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:34:58.841 [2024-05-14 04:32:13.400514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.400536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.409048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:34:58.841 [2024-05-14 04:32:13.409354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.409377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:58.841 [2024-05-14 04:32:13.417710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:58.841 [2024-05-14 04:32:13.418588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.841 [2024-05-14 04:32:13.418611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:59.107 [2024-05-14 04:32:13.426497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:59.108 [2024-05-14 04:32:13.427361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.427383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.435313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:59.108 [2024-05-14 04:32:13.436171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:1789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.436197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.444123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:59.108 [2024-05-14 04:32:13.445000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.445026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.452978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3498 00:34:59.108 [2024-05-14 04:32:13.453872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.453897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.463191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:34:59.108 [2024-05-14 04:32:13.464349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.464376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.473122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:34:59.108 [2024-05-14 04:32:13.474029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.474053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.482114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:34:59.108 [2024-05-14 04:32:13.482907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.482930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.490959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0788 00:34:59.108 [2024-05-14 04:32:13.491762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.491784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.499796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:59.108 [2024-05-14 04:32:13.500610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.500632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.508624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4140 00:34:59.108 [2024-05-14 04:32:13.509440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.509466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.517467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:34:59.108 [2024-05-14 04:32:13.518298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.518321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.526314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:34:59.108 [2024-05-14 04:32:13.527149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.527172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.535148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:34:59.108 [2024-05-14 04:32:13.536002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.536026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.544234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:34:59.108 [2024-05-14 04:32:13.544779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.544805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.553031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:34:59.108 [2024-05-14 04:32:13.553391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.553415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.561836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 
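Each "Data digest error" above is reported by data_crc32_calc_done() in tcp.c when the digest computed over a received PDU's data payload does not match the digest carried with the PDU; NVMe/TCP uses CRC32C for this data digest (DDGST), and the mismatching command is then completed with the transient transport error shown. The C sketch below is a minimal, self-contained illustration of such a check under that assumption, not SPDK's implementation: the 4 KiB buffer mirrors the len:0x1000 payloads in the log, and the corrupted byte offset is arbitrary.

/* ddgst_check.c - illustrative sketch, not SPDK source.
 * Verifies a CRC32C data digest over a PDU-sized payload the way an
 * NVMe/TCP receiver conceptually does; a mismatch corresponds to the
 * "Data digest error" lines above.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int b = 0; b < 8; b++)
			crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1u)));
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	/* Hypothetical 4 KiB payload, matching the len:0x1000 SGL entries above. */
	static uint8_t data[0x1000];
	uint32_t ddgst_sent = crc32c(data, sizeof(data));

	/* Simulate a corrupted payload: flip one bit before re-checking. */
	data[100] ^= 0x01;
	uint32_t ddgst_recv = crc32c(data, sizeof(data));

	if (ddgst_recv != ddgst_sent)
		printf("Data digest error: got 0x%08x, expected 0x%08x\n",
		       ddgst_recv, ddgst_sent);
	return 0;
}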
00:34:59.108 [2024-05-14 04:32:13.562161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.562190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.570646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:34:59.108 [2024-05-14 04:32:13.570947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.570977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.579445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:34:59.108 [2024-05-14 04:32:13.579724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.108 [2024-05-14 04:32:13.579746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:59.108 [2024-05-14 04:32:13.588328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5a90 00:34:59.108 [2024-05-14 04:32:13.588581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.588606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 04:32:13.597136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:34:59.109 [2024-05-14 04:32:13.597372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.597396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 04:32:13.606050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:34:59.109 [2024-05-14 04:32:13.606258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.606287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 04:32:13.614863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:34:59.109 [2024-05-14 04:32:13.615041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.615065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 04:32:13.623664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000195e4140 00:34:59.109 [2024-05-14 04:32:13.623813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.623836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 04:32:13.634047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:34:59.109 [2024-05-14 04:32:13.635318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.635342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 04:32:13.642882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fdeb0 00:34:59.109 [2024-05-14 04:32:13.644169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.644204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 04:32:13.651724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:34:59.109 [2024-05-14 04:32:13.653015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.653038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 04:32:13.659579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4140 00:34:59.109 [2024-05-14 04:32:13.660406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.660434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 04:32:13.668298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4140 00:34:59.109 [2024-05-14 04:32:13.669261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.669284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 04:32:13.677105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4140 00:34:59.109 [2024-05-14 04:32:13.678134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.678158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.109 [2024-05-14 
04:32:13.685909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:34:59.109 [2024-05-14 04:32:13.687020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.109 [2024-05-14 04:32:13.687043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.370 [2024-05-14 04:32:13.694151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f92c0 00:34:59.370 [2024-05-14 04:32:13.694955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.370 [2024-05-14 04:32:13.694977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:59.370 [2024-05-14 04:32:13.702708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:34:59.370 [2024-05-14 04:32:13.702938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.370 [2024-05-14 04:32:13.702961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:59.370 [2024-05-14 04:32:13.711657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:34:59.370 [2024-05-14 04:32:13.712227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.370 [2024-05-14 04:32:13.712254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:59.370 [2024-05-14 04:32:13.720471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:34:59.370 [2024-05-14 04:32:13.721045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.370 [2024-05-14 04:32:13.721070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:59.370 [2024-05-14 04:32:13.729453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0ff8 00:34:59.370 [2024-05-14 04:32:13.729853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.370 [2024-05-14 04:32:13.729877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:59.370 [2024-05-14 04:32:13.740133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:34:59.371 [2024-05-14 04:32:13.741316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.741341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.748535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5220 00:34:59.371 [2024-05-14 04:32:13.749241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.749265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.757467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:34:59.371 [2024-05-14 04:32:13.758516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.758549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.766277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:34:59.371 [2024-05-14 04:32:13.767334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.767357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.775060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:34:59.371 [2024-05-14 04:32:13.776115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.776142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.784039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eea00 00:34:59.371 [2024-05-14 04:32:13.784922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.784946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.791283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea248 00:34:59.371 [2024-05-14 04:32:13.792086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.792109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.800614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f96f8 00:34:59.371 [2024-05-14 04:32:13.801904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.801926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.810217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:34:59.371 [2024-05-14 04:32:13.810805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.810829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.819001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:34:59.371 [2024-05-14 04:32:13.819605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.819630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.827791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4f40 00:34:59.371 [2024-05-14 04:32:13.828369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.828392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.836578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:34:59.371 [2024-05-14 04:32:13.837468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.837491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.845192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:34:59.371 [2024-05-14 04:32:13.846458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.846481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.854651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0788 00:34:59.371 [2024-05-14 04:32:13.855841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.855864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.863439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:34:59.371 [2024-05-14 04:32:13.864617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.864641] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.872255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:59.371 [2024-05-14 04:32:13.873437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.873463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.881056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:34:59.371 [2024-05-14 04:32:13.882253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.882277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.889872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:34:59.371 [2024-05-14 04:32:13.891073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.891100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.898726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:34:59.371 [2024-05-14 04:32:13.899946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.899969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.907385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:34:59.371 [2024-05-14 04:32:13.908079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.908102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.916280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:34:59.371 [2024-05-14 04:32:13.917313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.917335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.925057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:34:59.371 [2024-05-14 04:32:13.926081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24018 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.926103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.934045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6300 00:34:59.371 [2024-05-14 04:32:13.934887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.934910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.942390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0788 00:34:59.371 [2024-05-14 04:32:13.943712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.371 [2024-05-14 04:32:13.943734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:59.371 [2024-05-14 04:32:13.951391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:34:59.371 [2024-05-14 04:32:13.952335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.372 [2024-05-14 04:32:13.952358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:59.630 [2024-05-14 04:32:13.959824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebfd0 00:34:59.630 [2024-05-14 04:32:13.960705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.630 [2024-05-14 04:32:13.960729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:59.630 [2024-05-14 04:32:13.968534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:34:59.630 [2024-05-14 04:32:13.969419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.630 [2024-05-14 04:32:13.969441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:59.630 [2024-05-14 04:32:13.977324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:34:59.630 [2024-05-14 04:32:13.978120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.630 [2024-05-14 04:32:13.978143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:59.630 [2024-05-14 04:32:13.986134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:34:59.630 [2024-05-14 04:32:13.986984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:17277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.630 [2024-05-14 04:32:13.987012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:59.630 [2024-05-14 04:32:13.994918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:34:59.630 [2024-05-14 04:32:13.995804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.630 [2024-05-14 04:32:13.995827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:59.630 [2024-05-14 04:32:14.003706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea680 00:34:59.630 [2024-05-14 04:32:14.004660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.630 [2024-05-14 04:32:14.004683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:59.630 [2024-05-14 04:32:14.012501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:34:59.630 [2024-05-14 04:32:14.013523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.630 [2024-05-14 04:32:14.013546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.020854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:34:59.631 [2024-05-14 04:32:14.021264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.021287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.031011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:34:59.631 [2024-05-14 04:32:14.032218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.032241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.039822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:34:59.631 [2024-05-14 04:32:14.041043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.041068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.048625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb480 00:34:59.631 [2024-05-14 04:32:14.049847] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.049868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.057243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:34:59.631 [2024-05-14 04:32:14.058203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.058227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.065414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:34:59.631 [2024-05-14 04:32:14.065798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.065821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.074460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:34:59.631 [2024-05-14 04:32:14.075345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.075369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.082864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:34:59.631 [2024-05-14 04:32:14.083303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.083329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.091785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:34:59.631 [2024-05-14 04:32:14.092566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.092590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.100554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f57b0 00:34:59.631 [2024-05-14 04:32:14.101327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.101350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.109559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f2510 00:34:59.631 [2024-05-14 04:32:14.110157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.110180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.118731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:34:59.631 [2024-05-14 04:32:14.119321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.119343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.128785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4298 00:34:59.631 [2024-05-14 04:32:14.130071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.130095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.137567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 00:34:59.631 [2024-05-14 04:32:14.138851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.138885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.145188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:34:59.631 [2024-05-14 04:32:14.146081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.146103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.153608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4298 00:34:59.631 [2024-05-14 04:32:14.153782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.153805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.162677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:34:59.631 [2024-05-14 04:32:14.163326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:59.631 [2024-05-14 04:32:14.163350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:59.631 [2024-05-14 04:32:14.171467] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dece0
00:34:59.631 [2024-05-14 04:32:14.172457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.631 [2024-05-14 04:32:14.172482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:34:59.631 [2024-05-14 04:32:14.180851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8
00:34:59.631 [2024-05-14 04:32:14.181903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.631 [2024-05-14 04:32:14.181926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:59.631 [2024-05-14 04:32:14.189823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0ea0
00:34:59.631 [2024-05-14 04:32:14.190560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.631 [2024-05-14 04:32:14.190583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:34:59.631 [2024-05-14 04:32:14.198600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaef0
00:34:59.631 [2024-05-14 04:32:14.199139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.631 [2024-05-14 04:32:14.199161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:34:59.631 [2024-05-14 04:32:14.207374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ff3c8
00:34:59.631 [2024-05-14 04:32:14.207886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:59.631 [2024-05-14 04:32:14.207912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:59.631
00:34:59.631 Latency(us)
00:34:59.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:59.631 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:59.631 nvme0n1 : 2.00 28700.05 112.11 0.00 0.00 4456.43 2164.41 12900.24
00:34:59.631 ===================================================================================================================
00:34:59.631 Total : 28700.05 112.11 0.00 0.00 4456.43 2164.41 12900.24
00:34:59.631 0
00:34:59.890 04:32:14 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:59.890 04:32:14 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:59.890 | .driver_specific
00:34:59.890 | .nvme_error
00:34:59.890 | .status_code
00:34:59.890 | .command_transient_transport_error'
00:34:59.890 04:32:14 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:59.890 04:32:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
04:32:14 -- host/digest.sh@71 -- # (( 225 > 0 ))
00:34:59.890 04:32:14 -- host/digest.sh@73 -- # killprocess 61008
00:34:59.890 04:32:14 -- common/autotest_common.sh@926 -- # '[' -z 61008 ']'
00:34:59.890 04:32:14 -- common/autotest_common.sh@930 -- # kill -0 61008
00:34:59.890 04:32:14 -- common/autotest_common.sh@931 -- # uname
00:34:59.890 04:32:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:34:59.890 04:32:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61008
00:34:59.890 04:32:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:34:59.890 04:32:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:34:59.890 04:32:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61008'
killing process with pid 61008
04:32:14 -- common/autotest_common.sh@945 -- # kill 61008
Received shutdown signal, test time was about 2.000000 seconds
00:34:59.890
00:34:59.890 Latency(us)
00:34:59.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:59.890 ===================================================================================================================
00:34:59.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:59.890 04:32:14 -- common/autotest_common.sh@950 -- # wait 61008
00:35:00.457 04:32:14 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:35:00.457 04:32:14 -- host/digest.sh@54 -- # local rw bs qd
00:35:00.457 04:32:14 -- host/digest.sh@56 -- # rw=randwrite
00:35:00.458 04:32:14 -- host/digest.sh@56 -- # bs=131072
00:35:00.458 04:32:14 -- host/digest.sh@56 -- # qd=16
00:35:00.458 04:32:14 -- host/digest.sh@58 -- # bperfpid=61637
00:35:00.458 04:32:14 -- host/digest.sh@60 -- # waitforlisten 61637 /var/tmp/bperf.sock
00:35:00.458 04:32:14 -- common/autotest_common.sh@819 -- # '[' -z 61637 ']'
00:35:00.458 04:32:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:00.458 04:32:14 -- common/autotest_common.sh@824 -- # local max_retries=100
00:35:00.458 04:32:14 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:35:00.458 04:32:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:00.458 04:32:14 -- common/autotest_common.sh@828 -- # xtrace_disable
00:35:00.458 04:32:14 -- common/autotest_common.sh@10 -- # set +x
00:35:00.458 [2024-05-14 04:32:14.830146] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:35:00.458 [2024-05-14 04:32:14.830261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61637 ]
00:35:00.458 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:00.458 Zero copy mechanism will not be used.
00:35:00.458 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.458 [2024-05-14 04:32:14.941295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.458 [2024-05-14 04:32:15.031930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.028 04:32:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:01.028 04:32:15 -- common/autotest_common.sh@852 -- # return 0 00:35:01.028 04:32:15 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:01.028 04:32:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:01.288 04:32:15 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:01.288 04:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.288 04:32:15 -- common/autotest_common.sh@10 -- # set +x 00:35:01.288 04:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.288 04:32:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.288 04:32:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.546 nvme0n1 00:35:01.546 04:32:16 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:01.546 04:32:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.546 04:32:16 -- common/autotest_common.sh@10 -- # set +x 00:35:01.546 04:32:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.546 04:32:16 -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:01.546 04:32:16 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:01.546 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:01.546 Zero copy mechanism will not be used. 00:35:01.546 Running I/O for 2 seconds... 
00:35:01.546 [2024-05-14 04:32:16.117662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.546 [2024-05-14 04:32:16.117811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-05-14 04:32:16.117849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.546 [2024-05-14 04:32:16.125401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.546 [2024-05-14 04:32:16.125538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-05-14 04:32:16.125575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.546 [2024-05-14 04:32:16.132449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.546 [2024-05-14 04:32:16.132568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-05-14 04:32:16.132594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.806 [2024-05-14 04:32:16.139646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.806 [2024-05-14 04:32:16.139784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.806 [2024-05-14 04:32:16.139811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.806 [2024-05-14 04:32:16.146641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.806 [2024-05-14 04:32:16.146764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.806 [2024-05-14 04:32:16.146794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.806 [2024-05-14 04:32:16.155066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.806 [2024-05-14 04:32:16.155291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.806 [2024-05-14 04:32:16.155318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.806 [2024-05-14 04:32:16.163230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.806 [2024-05-14 04:32:16.163385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.163409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.170403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.170711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.170738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.178944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.179174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.179203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.186745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.186872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.186897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.193364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.193536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.193557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.200095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.200273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.200298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.206552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.206692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.206717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.212875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.213064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 
04:32:16.213092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.219327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.219525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.219549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.225215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.225390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.225413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.231710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.231884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.231906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.238123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.238240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.238263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.244375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.244611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.244634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.250456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.250700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.250722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.257025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.257122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.257144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.263852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.263932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.263955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.270842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.270992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.271013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.277914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.278140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.278169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.284873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.285075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.285108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.291797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.291988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.292014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.298902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.299038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.299060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.305838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.305949] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.305972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.312188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.312371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.312394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.320219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.320311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.320334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.326567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.326751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.326778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.332770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.333003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.333027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.339218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.339389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.339412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.807 [2024-05-14 04:32:16.346362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.807 [2024-05-14 04:32:16.346518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.807 [2024-05-14 04:32:16.346542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.808 [2024-05-14 04:32:16.353235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.808 [2024-05-14 04:32:16.353416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.808 [2024-05-14 04:32:16.353443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.808 [2024-05-14 04:32:16.360062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.808 [2024-05-14 04:32:16.360234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.808 [2024-05-14 04:32:16.360274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.808 [2024-05-14 04:32:16.366636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.808 [2024-05-14 04:32:16.366814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.808 [2024-05-14 04:32:16.366839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.808 [2024-05-14 04:32:16.373235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.808 [2024-05-14 04:32:16.373399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.808 [2024-05-14 04:32:16.373423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.808 [2024-05-14 04:32:16.379667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.808 [2024-05-14 04:32:16.379874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.808 [2024-05-14 04:32:16.379898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.808 [2024-05-14 04:32:16.386102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:01.808 [2024-05-14 04:32:16.386319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.808 [2024-05-14 04:32:16.386342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.392720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.392853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.392876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 
04:32:16.399175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.399349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.399373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.404868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.404969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.404992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.409584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.409678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.409703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.413851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.413955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.413977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.419263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.419352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.419376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.423411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.423617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.423640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.427379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.427550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.427580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.431210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.431338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.431361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.435084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.435171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.435197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.439142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.068 [2024-05-14 04:32:16.439261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.068 [2024-05-14 04:32:16.439283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.068 [2024-05-14 04:32:16.443111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.443270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.443295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.447149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.447255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.447277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.451100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.451166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.451195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.455148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.455315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.455339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.459138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.459252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.459277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.463000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.463116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.463138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.466928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.467050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.467073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.471079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.471151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.471172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.475009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.475103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.475126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.479010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.479169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.479195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.483074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.483222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.483247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.487077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.487211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.487234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.491248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.491380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.491403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.495231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.495346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.495371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.499197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.499273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.499296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.503195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.503303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.503325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.507007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.507106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.507129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.511160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.511309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.511331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.515155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.515316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.515339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.519254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.519358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.519379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.523233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.523341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.523362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.527205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.527333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.527354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.531117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.531246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.531271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.535053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.535119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.535143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.539076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.539149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.539171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.543120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.543253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.543275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.547215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.547359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.547382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.069 [2024-05-14 04:32:16.551387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.069 [2024-05-14 04:32:16.551508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-05-14 04:32:16.551530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.555232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.555307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.555330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.559273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.559402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.559431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.563406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.563482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.563509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.567544] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.567639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.567662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.571543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.571634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.571656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.575402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.575563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.575587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.579467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.579578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.579604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.583346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.583521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.583547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.587468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.587562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.587585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.591536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.591648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.591678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.595452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.595575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.595598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.599472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.599609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.599632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.603632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.603767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.603788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.608680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.608880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.608901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.615003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.615149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.615174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.622275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.622477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.622500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.629108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.629285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.629309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.635492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.635641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.635662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.642191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.642377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.642402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.070 [2024-05-14 04:32:16.648671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.070 [2024-05-14 04:32:16.648847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.070 [2024-05-14 04:32:16.648869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.655216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.655408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.655429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.661688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.661842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.661865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.668134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.668279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.668303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.674865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.675057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.675088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.681984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.682220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.682248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.688454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.688606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.688628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.694765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.694906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.694929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.700442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.700600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.700623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.704969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.705097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.705122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.710041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.710161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.710190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.714859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.714991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.715015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.719639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.719758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.719781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.724116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.724221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.724249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.729239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.729356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.729379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.734121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.734247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.734269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.738183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.738260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.738282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.742101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.742246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.742268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.330 [2024-05-14 04:32:16.746062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:35:02.330 [2024-05-14 04:32:16.746173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.330 [2024-05-14 04:32:16.746200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.750233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.750384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.750408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.754275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.754375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.754397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.758406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.758523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.758547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.762595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.762730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.762751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.766437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.766507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.766529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.770519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.770632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.770653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.774630] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.774718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.774740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.778628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.778790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.778821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.782723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.782880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.782902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.786671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.786822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.786843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.790668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.790757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.790780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.794803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.794893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.794915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.798917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.798996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.799017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.802911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.803004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.803026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.806998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.807121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.807142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.811876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.812048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.812071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.816119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.816254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.816277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.821869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.821983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.822006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.826700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.826799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.826821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.832959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.833119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.833141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.839102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.839228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.839249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.846761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.846908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.846932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.852292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.852454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.852477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.856838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.857021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.857046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.860984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.861141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.861168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.331 [2024-05-14 04:32:16.865048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.331 [2024-05-14 04:32:16.865170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.331 [2024-05-14 04:32:16.865205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.869323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.869439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.869467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.873433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.873553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.873581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.877598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.877709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.877733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.881826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.881931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.881957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.885975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.886042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.886071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.890330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.890447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.890476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.894551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.894718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.894746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.899127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.899282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.899310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.903846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.903959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.903987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.909163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.909297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.909335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.332 [2024-05-14 04:32:16.913948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.332 [2024-05-14 04:32:16.914112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.332 [2024-05-14 04:32:16.914141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.919736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.594 [2024-05-14 04:32:16.919985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.594 [2024-05-14 04:32:16.920008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.926128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.594 [2024-05-14 04:32:16.926261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.594 [2024-05-14 04:32:16.926289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.933176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.594 [2024-05-14 04:32:16.933385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.594 [2024-05-14 04:32:16.933407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.940105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:35:02.594 [2024-05-14 04:32:16.940291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.594 [2024-05-14 04:32:16.940319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.946601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.594 [2024-05-14 04:32:16.946740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.594 [2024-05-14 04:32:16.946768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.951409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.594 [2024-05-14 04:32:16.951617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.594 [2024-05-14 04:32:16.951643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.956153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.594 [2024-05-14 04:32:16.956242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.594 [2024-05-14 04:32:16.956268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.961754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.594 [2024-05-14 04:32:16.961849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.594 [2024-05-14 04:32:16.961878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.965862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.594 [2024-05-14 04:32:16.965993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.594 [2024-05-14 04:32:16.966021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.970247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.594 [2024-05-14 04:32:16.970339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.594 [2024-05-14 04:32:16.970362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.594 [2024-05-14 04:32:16.974310] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:16.974460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:16.974485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:16.978410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:16.978518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:16.978544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:16.982582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:16.982702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:16.982726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:16.986644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:16.986796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:16.986821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:16.990822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:16.990919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:16.990942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:16.994728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:16.994844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:16.994867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:16.998713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:16.998787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:16.998809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.002744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.002882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.002904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.006652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.006763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.006788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.010630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.010767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.010790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.014662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.014838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.014859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.018970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.019070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.019098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.022865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.022977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.023001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.026709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.026829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.026851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.030704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.030816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.030838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.034766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.034832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.034853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.038955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.039106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.039130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.043062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.043202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.043226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.047149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.047258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.047280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.051138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.051245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.051268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.055253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.055354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.055376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.059141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.059209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.059233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.063065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.063169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.063196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.066980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.067080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.067103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.071027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.071129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.071151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.075034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.075220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.075242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.079191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.079315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.595 [2024-05-14 04:32:17.079338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.595 [2024-05-14 04:32:17.083292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.595 [2024-05-14 04:32:17.083398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.083421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.087596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.087726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.087749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.091580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.091651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.091674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.095597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.095724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.095745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.099495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.099588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.099610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.103641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.103752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.103777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.107691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.107805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.107831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.111672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.111813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.111836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.115605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.115699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.115722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.119714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.119783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.119804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.123741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.123869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.123897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.127974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.128104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.128126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.132565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.132742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.132766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.137974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.138109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.138134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.143948] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.144097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.144123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.148261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.148423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.148446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.152951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.153045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.153077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.157249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.157322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.157346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.162693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.162828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.162853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.166641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.166741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.166764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.170533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.170610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.170632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.174511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.174639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.174661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.596 [2024-05-14 04:32:17.178509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.596 [2024-05-14 04:32:17.178664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.596 [2024-05-14 04:32:17.178687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.859 [2024-05-14 04:32:17.182643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.859 [2024-05-14 04:32:17.182795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.859 [2024-05-14 04:32:17.182817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.859 [2024-05-14 04:32:17.186421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.859 [2024-05-14 04:32:17.186541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.859 [2024-05-14 04:32:17.186564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.859 [2024-05-14 04:32:17.190422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.859 [2024-05-14 04:32:17.190571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.859 [2024-05-14 04:32:17.190601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.859 [2024-05-14 04:32:17.194425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.859 [2024-05-14 04:32:17.194529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.859 [2024-05-14 04:32:17.194552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.859 [2024-05-14 04:32:17.198429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.859 [2024-05-14 04:32:17.198500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.198522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.202364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.202451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.202476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.206314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.206455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.206477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.210197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.210361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.210383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.214056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.214161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.214183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.218014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.218130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.218152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.222097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.222215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.222236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.226067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.226160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.226181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.230169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.230273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.230294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.234129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.234277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.234298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.238363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.238514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.238535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.242269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.242385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.242407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.247037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.247210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.247237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.253658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.253857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.253883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.259736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.259858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.259883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.266233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.266349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.266373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.272447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.272565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.272589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.278488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.278564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.278591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.284391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.284513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.284536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.290647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.290764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.290787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.296511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.296608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.296630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.302585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.302713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.302734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.308458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.308559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.308588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.312333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.312442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.312468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.316116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.316202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.316226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.319920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.320030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.320054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.323818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.323926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.323949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.860 [2024-05-14 04:32:17.327600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.860 [2024-05-14 04:32:17.327755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.860 [2024-05-14 04:32:17.327786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.331428] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.331555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.331582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.335131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.335207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.335230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.338859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.338961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.338984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.342507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.342578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.342601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.346349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.346453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.346475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.350158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.350303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.350327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.354051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.354154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.354183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.357877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.358009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.358034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.361637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.361771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.361793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.365354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.365448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.365471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.369127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.369220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.369247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.372883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.372993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.373019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.376640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.376702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.376727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.380371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.380499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.380521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.384254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.384330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.384354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.388125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.388250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.388272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.391882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.391983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.392004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.395732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.395848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.395872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.399533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.399620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.399643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.403382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.403482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.403504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.407577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.407698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.407720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.412164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.412294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.412316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.415963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.416105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.416127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.421661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.421814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.421848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.425487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.425595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.425624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.429474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.429576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.429603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.433299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.861 [2024-05-14 04:32:17.433405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.861 [2024-05-14 04:32:17.433442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.861 [2024-05-14 04:32:17.437248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.862 [2024-05-14 04:32:17.437365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.862 [2024-05-14 04:32:17.437395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:02.862 [2024-05-14 04:32:17.441171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:02.862 [2024-05-14 04:32:17.441246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.862 [2024-05-14 04:32:17.441276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.124 [2024-05-14 04:32:17.445060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.445178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.445223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.448972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.449109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.449142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.452897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.453024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.453054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.456789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.456866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.456906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.460712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.460801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.460829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.464442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.464512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.464565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.468215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.468294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.468318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.472166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.472294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.472318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.475963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.476092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.476115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.479850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.480006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.480030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.483636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.483762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.483791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.487370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.487458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.487484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.491029] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.491171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.491198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.494745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.494844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.494868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.498576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.498661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.498684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.502321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.502385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.502408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.506161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.506316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.506339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.510032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.510154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.510177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.513723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.513881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.513904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.517396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.517481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.517504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.520983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.521074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.521096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.524582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.524676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.524698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.528253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.528323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.528345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.532079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.532202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.532227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.535905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.536012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.536034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.539683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.539810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.539832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.125 [2024-05-14 04:32:17.543425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.125 [2024-05-14 04:32:17.543564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.125 [2024-05-14 04:32:17.543585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.547387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.547478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.547501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.550958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.551079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.551102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.554675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.554799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.554822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.558313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.558439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.558462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.562010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.562128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.562149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.565647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.565796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.565820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.569505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.569672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.569693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.573319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.573486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.573507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.577050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.577140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.577163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.580785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.580898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.580919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.584496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.584577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.584600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.588201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.588321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.588343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.592678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.592802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.592824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.596388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.596536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.596558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.599986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.600108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.600135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.603620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.603747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.603774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.607976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.608106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.608129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.612088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.612176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.612203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.616939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.617008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.617032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.621712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.621846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.621868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.626898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.627096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.627117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.633192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.633307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.633335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.639782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.639978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.640003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.646566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.646715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.646738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.651106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.651287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.651311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.655619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.655861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.655884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.660105] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.660224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.126 [2024-05-14 04:32:17.660246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.126 [2024-05-14 04:32:17.665047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.126 [2024-05-14 04:32:17.665131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.665157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.669466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.669546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.669568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.673307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.673467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.673495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.677016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.677164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.677196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.680734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.680891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.680915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.684583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.684659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.684682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.688372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.688455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.688480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.691970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.692042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.692064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.695763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.695853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.695875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.699447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.699527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.699549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.703151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.703275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.703296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.127 [2024-05-14 04:32:17.706993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.127 [2024-05-14 04:32:17.707127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.127 [2024-05-14 04:32:17.707149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.710701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.710854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.710878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.714325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.714435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.714457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.717928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.717996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.718019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.721628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.721730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.721761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.725225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.725334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.725354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.728890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.728958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.728986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.733106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.733275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.733296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.738606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.738792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.738813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.744790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.744978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.745008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.750908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.751081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.751104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.756589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.756726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.756749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.763385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.763548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.763570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.769663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.769877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.769900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.776870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.777008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.777033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.781653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.781785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.781810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.785952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.786088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.786110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.790075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.790197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.790220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.793931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.794011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.794033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.797717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.797806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.797830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.801378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.801445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.801466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.805096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.805161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.805182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.391 [2024-05-14 04:32:17.808734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:35:03.391 [2024-05-14 04:32:17.808830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.391 [2024-05-14 04:32:17.808852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.814494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.814595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.814621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.818380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.818537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.818558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.822145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.822243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.822265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.825913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.826012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.826034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.829610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.829729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.829750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.833348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.833481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.833503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.837209] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.837278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.837302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.840903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.841001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.841024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.844562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.844659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.844681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.848180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.848487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.848511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.852002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.852124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.852146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.855728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.855808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.855832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.859451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.859553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.859574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.863175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.863269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.863290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.867099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.867162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.867192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.870958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.871105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.871128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.874649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.874729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.874751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.878405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.878546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.878573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.882061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.882145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.882171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.885839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.885956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.885977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.889468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.889627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.889650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.893235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.893322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.893343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.896871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.896971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.896994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.900725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.900819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.900841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.904517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.904672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.904694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.908361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.908503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.908525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.911983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.392 [2024-05-14 04:32:17.912105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:03.392 [2024-05-14 04:32:17.912127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.392 [2024-05-14 04:32:17.915732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.915800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.915821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.919524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.919628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.919650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.923266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.923350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.923371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.926916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.926986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.927007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.930632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.930716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.930739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.934343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.934486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.934508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.938149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.938290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.938312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.941787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.941886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.941910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.945513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.945637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.945660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.949178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.949251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.949274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.953004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.953088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.953111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.956716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.956825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.956846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.960497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.960604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.960626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.964284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.964415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.964439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.967841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.967961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.967984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.971710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.971797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.971819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.393 [2024-05-14 04:32:17.975353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.393 [2024-05-14 04:32:17.975460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.393 [2024-05-14 04:32:17.975481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.654 [2024-05-14 04:32:17.978939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.654 [2024-05-14 04:32:17.979040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.654 [2024-05-14 04:32:17.979068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.654 [2024-05-14 04:32:17.982627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.654 [2024-05-14 04:32:17.982715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.654 [2024-05-14 04:32:17.982745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.654 [2024-05-14 04:32:17.986344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.654 [2024-05-14 04:32:17.986460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.654 [2024-05-14 04:32:17.986484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.654 [2024-05-14 04:32:17.990145] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.654 [2024-05-14 04:32:17.990262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.654 [2024-05-14 04:32:17.990288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.654 [2024-05-14 04:32:17.993888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.654 [2024-05-14 04:32:17.994038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.654 [2024-05-14 04:32:17.994060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.654 [2024-05-14 04:32:17.997673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.654 [2024-05-14 04:32:17.997819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.654 [2024-05-14 04:32:17.997842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.654 [2024-05-14 04:32:18.001319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.654 [2024-05-14 04:32:18.001418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.654 [2024-05-14 04:32:18.001440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.654 [2024-05-14 04:32:18.005061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.654 [2024-05-14 04:32:18.005135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.654 [2024-05-14 04:32:18.005157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.654 [2024-05-14 04:32:18.008700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.654 [2024-05-14 04:32:18.008767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.654 [2024-05-14 04:32:18.008789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.654 [2024-05-14 04:32:18.012386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.654 [2024-05-14 04:32:18.012473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.012494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.015897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.016006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.016027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.019548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.019634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.019657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.023142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.023292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.023318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.026896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.027073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.027097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.030596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.030713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.030734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.034354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.034473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.034494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.038117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.038194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.038215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.041808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.041910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.041931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.045526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.045616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.045637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.049424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.049581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.049604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.053078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.053181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.053209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.056802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.056947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.056968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.060377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.060488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.060510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.064125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.064199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.064221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.068023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.068122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.068144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.071805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.071897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.071920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.075552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.075631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.075654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.079319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.079424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.079447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.083120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.083223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.083246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.086934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.087032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.087058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.090574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.090668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.090690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.094303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.094387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.094408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.097999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.098087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.098111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:03.655 [2024-05-14 04:32:18.101771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:35:03.655 [2024-05-14 04:32:18.101835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.655 [2024-05-14 04:32:18.101861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:03.655 00:35:03.655 Latency(us) 00:35:03.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.655 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:03.655 nvme0n1 : 2.00 6831.75 853.97 0.00 0.00 2338.10 1586.66 12279.38 00:35:03.655 =================================================================================================================== 00:35:03.655 Total : 6831.75 853.97 0.00 0.00 2338.10 1586.66 12279.38 00:35:03.655 0 00:35:03.655 04:32:18 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:03.655 04:32:18 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:03.655 | .driver_specific 00:35:03.655 | .nvme_error 00:35:03.655 | .status_code 00:35:03.655 | .command_transient_transport_error' 00:35:03.655 04:32:18 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:03.655 04:32:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:03.914 04:32:18 -- host/digest.sh@71 -- # (( 441 > 0 )) 00:35:03.914 04:32:18 -- host/digest.sh@73 -- # killprocess 61637 00:35:03.914 04:32:18 -- common/autotest_common.sh@926 -- # '[' -z 61637 ']' 00:35:03.914 04:32:18 -- common/autotest_common.sh@930 -- # kill -0 61637 00:35:03.914 04:32:18 -- common/autotest_common.sh@931 -- # uname 00:35:03.914 04:32:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:03.914 04:32:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61637 00:35:03.914 04:32:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:35:03.914 04:32:18 -- common/autotest_common.sh@936 -- 
# '[' reactor_1 = sudo ']' 00:35:03.914 04:32:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61637' 00:35:03.914 killing process with pid 61637 00:35:03.914 04:32:18 -- common/autotest_common.sh@945 -- # kill 61637 00:35:03.914 Received shutdown signal, test time was about 2.000000 seconds 00:35:03.914 00:35:03.914 Latency(us) 00:35:03.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.914 =================================================================================================================== 00:35:03.914 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:03.914 04:32:18 -- common/autotest_common.sh@950 -- # wait 61637 00:35:04.173 04:32:18 -- host/digest.sh@115 -- # killprocess 59167 00:35:04.173 04:32:18 -- common/autotest_common.sh@926 -- # '[' -z 59167 ']' 00:35:04.173 04:32:18 -- common/autotest_common.sh@930 -- # kill -0 59167 00:35:04.173 04:32:18 -- common/autotest_common.sh@931 -- # uname 00:35:04.173 04:32:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:04.173 04:32:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59167 00:35:04.173 04:32:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:04.173 04:32:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:04.173 04:32:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59167' 00:35:04.173 killing process with pid 59167 00:35:04.173 04:32:18 -- common/autotest_common.sh@945 -- # kill 59167 00:35:04.173 04:32:18 -- common/autotest_common.sh@950 -- # wait 59167 00:35:04.861 00:35:04.861 real 0m16.956s 00:35:04.861 user 0m32.473s 00:35:04.861 sys 0m3.426s 00:35:04.861 04:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:04.861 04:32:19 -- common/autotest_common.sh@10 -- # set +x 00:35:04.861 ************************************ 00:35:04.861 END TEST nvmf_digest_error 00:35:04.861 ************************************ 00:35:04.861 04:32:19 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:35:04.861 04:32:19 -- host/digest.sh@139 -- # nvmftestfini 00:35:04.861 04:32:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:04.861 04:32:19 -- nvmf/common.sh@116 -- # sync 00:35:04.861 04:32:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:04.861 04:32:19 -- nvmf/common.sh@119 -- # set +e 00:35:04.861 04:32:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:04.861 04:32:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:04.861 rmmod nvme_tcp 00:35:04.861 rmmod nvme_fabrics 00:35:04.861 rmmod nvme_keyring 00:35:04.861 04:32:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:04.861 04:32:19 -- nvmf/common.sh@123 -- # set -e 00:35:04.861 04:32:19 -- nvmf/common.sh@124 -- # return 0 00:35:04.861 04:32:19 -- nvmf/common.sh@477 -- # '[' -n 59167 ']' 00:35:04.861 04:32:19 -- nvmf/common.sh@478 -- # killprocess 59167 00:35:04.861 04:32:19 -- common/autotest_common.sh@926 -- # '[' -z 59167 ']' 00:35:04.861 04:32:19 -- common/autotest_common.sh@930 -- # kill -0 59167 00:35:04.861 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (59167) - No such process 00:35:04.861 04:32:19 -- common/autotest_common.sh@953 -- # echo 'Process with pid 59167 is not found' 00:35:04.861 Process with pid 59167 is not found 00:35:04.861 04:32:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:35:04.861 04:32:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:04.861 04:32:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 
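The pass condition for the digest-error run above comes from the transient transport error counter that the bperf application accumulated while the CRC-corrupting writes were in flight. A minimal sketch of that check, assuming the bperf RPC socket at /var/tmp/bperf.sock is still up and nvme0n1 is the bdev under test (this mirrors the host/digest.sh trace above; it is not itself part of the log):

  # Ask the bperf app for per-bdev I/O statistics and pull out the count of
  # commands that completed with a transient transport error.
  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # This run reported 441; the test only requires the count to be greater than zero.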
00:35:04.861 04:32:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:04.861 04:32:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:04.861 04:32:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.861 04:32:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:04.861 04:32:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.767 04:32:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:06.767 00:35:06.767 real 1m13.218s 00:35:06.767 user 1m42.477s 00:35:06.767 sys 0m11.758s 00:35:06.767 04:32:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:06.767 04:32:21 -- common/autotest_common.sh@10 -- # set +x 00:35:06.767 ************************************ 00:35:06.767 END TEST nvmf_digest 00:35:06.767 ************************************ 00:35:06.767 04:32:21 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:35:06.767 04:32:21 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:35:06.767 04:32:21 -- nvmf/nvmf.sh@119 -- # [[ phy-fallback == phy ]] 00:35:06.767 04:32:21 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:35:06.767 04:32:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:06.767 04:32:21 -- common/autotest_common.sh@10 -- # set +x 00:35:07.026 04:32:21 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:35:07.026 00:35:07.026 real 21m57.479s 00:35:07.026 user 60m37.593s 00:35:07.026 sys 4m44.154s 00:35:07.026 04:32:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:07.026 04:32:21 -- common/autotest_common.sh@10 -- # set +x 00:35:07.026 ************************************ 00:35:07.026 END TEST nvmf_tcp 00:35:07.026 ************************************ 00:35:07.026 04:32:21 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:35:07.026 04:32:21 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:07.026 04:32:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:07.026 04:32:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:07.026 04:32:21 -- common/autotest_common.sh@10 -- # set +x 00:35:07.026 ************************************ 00:35:07.026 START TEST spdkcli_nvmf_tcp 00:35:07.026 ************************************ 00:35:07.026 04:32:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:07.026 * Looking for test storage... 
00:35:07.026 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:35:07.026 04:32:21 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:35:07.026 04:32:21 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:07.026 04:32:21 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:35:07.026 04:32:21 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.026 04:32:21 -- nvmf/common.sh@7 -- # uname -s 00:35:07.026 04:32:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.026 04:32:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.026 04:32:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.026 04:32:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.026 04:32:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.026 04:32:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.026 04:32:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.026 04:32:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.026 04:32:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.026 04:32:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.026 04:32:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:35:07.026 04:32:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:35:07.026 04:32:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.026 04:32:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.026 04:32:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:35:07.026 04:32:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:35:07.026 04:32:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.026 04:32:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.026 04:32:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.026 04:32:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.026 04:32:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.026 04:32:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.026 04:32:21 -- paths/export.sh@5 -- # export PATH 00:35:07.026 04:32:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.026 04:32:21 -- nvmf/common.sh@46 -- # : 0 00:35:07.026 04:32:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:07.026 04:32:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:07.026 04:32:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:07.026 04:32:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.026 04:32:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.026 04:32:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:07.026 04:32:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:07.026 04:32:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:07.026 04:32:21 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:07.026 04:32:21 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:07.026 04:32:21 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:07.026 04:32:21 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:07.026 04:32:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:07.026 04:32:21 -- common/autotest_common.sh@10 -- # set +x 00:35:07.027 04:32:21 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:07.027 04:32:21 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=63094 00:35:07.027 04:32:21 -- spdkcli/common.sh@34 -- # waitforlisten 63094 00:35:07.027 04:32:21 -- common/autotest_common.sh@819 -- # '[' -z 63094 ']' 00:35:07.027 04:32:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.027 04:32:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:07.027 04:32:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.027 04:32:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:07.027 04:32:21 -- common/autotest_common.sh@10 -- # set +x 00:35:07.027 04:32:21 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:07.027 [2024-05-14 04:32:21.551798] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:35:07.027 [2024-05-14 04:32:21.551917] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63094 ] 00:35:07.287 EAL: No free 2048 kB hugepages reported on node 1 00:35:07.287 [2024-05-14 04:32:21.665201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:07.288 [2024-05-14 04:32:21.756650] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:07.288 [2024-05-14 04:32:21.756853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.288 [2024-05-14 04:32:21.756858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.859 04:32:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:07.859 04:32:22 -- common/autotest_common.sh@852 -- # return 0 00:35:07.859 04:32:22 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:07.859 04:32:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:07.859 04:32:22 -- common/autotest_common.sh@10 -- # set +x 00:35:07.859 04:32:22 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:07.859 04:32:22 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:07.859 04:32:22 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:07.859 04:32:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:07.859 04:32:22 -- common/autotest_common.sh@10 -- # set +x 00:35:07.859 04:32:22 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:07.859 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:07.859 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:07.859 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:07.859 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:07.859 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:07.859 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:07.859 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:07.859 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:07.859 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:07.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:07.859 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:07.859 ' 00:35:08.120 [2024-05-14 04:32:22.628763] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:35:10.659 [2024-05-14 04:32:24.675267] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.594 [2024-05-14 04:32:25.837131] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:13.499 [2024-05-14 04:32:27.968142] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:15.404 [2024-05-14 04:32:29.798839] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:16.781 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:16.781 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:16.781 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:16.781 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:16.781 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:16.781 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:16.782 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:16.782 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:16.782 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:16.782 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:16.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:16.782 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:16.782 04:32:31 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:16.782 04:32:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:16.782 04:32:31 -- common/autotest_common.sh@10 -- # set +x 00:35:16.782 04:32:31 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:16.782 04:32:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:16.782 04:32:31 -- common/autotest_common.sh@10 -- # set +x 00:35:16.782 04:32:31 -- spdkcli/nvmf.sh@69 -- # check_match 00:35:16.782 04:32:31 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:17.352 04:32:31 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:17.352 04:32:31 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:17.352 04:32:31 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:17.352 04:32:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:17.352 04:32:31 -- common/autotest_common.sh@10 -- # set +x 00:35:17.352 04:32:31 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:17.352 
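The spdkcli.py ll /nvmf call traced here dumps the configured /nvmf tree; check_match then compares that listing against the stored expectations with the match helper, as the following trace lines show. Roughly, and assuming the listing is redirected into the .test file (the redirect itself is not visible in the trace):

  # Capture the live configuration and compare it with the recorded baseline.
  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf \
      > /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
  /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match \
      /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
  rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test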
04:32:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:17.352 04:32:31 -- common/autotest_common.sh@10 -- # set +x 00:35:17.352 04:32:31 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:17.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:17.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:17.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:17.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:17.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:17.352 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:17.352 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:17.352 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:17.352 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:17.352 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:17.352 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:17.352 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:17.352 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:17.352 ' 00:35:22.624 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:22.624 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:22.624 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:22.624 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:22.624 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:22.624 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:22.624 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:22.624 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:22.624 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:22.624 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:22.624 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:22.624 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:22.624 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:22.624 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:22.624 04:32:36 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:22.624 04:32:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:22.624 04:32:36 -- common/autotest_common.sh@10 -- # set +x 00:35:22.624 04:32:36 -- spdkcli/nvmf.sh@90 -- # killprocess 63094 00:35:22.624 04:32:36 -- common/autotest_common.sh@926 -- # '[' -z 63094 ']' 00:35:22.624 04:32:36 -- common/autotest_common.sh@930 -- # kill -0 63094 
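The teardown pass above mirrors the earlier create pass: every object built through spdkcli (listeners, hosts, namespaces, subsystems, malloc bdevs) is removed again, and each command is expected to succeed exactly once. A minimal standalone sketch of the same flow, assuming a running nvmf_tgt and the stock scripts/spdkcli.py from an SPDK checkout; the NQN, serial and malloc create parameters below are placeholders, not values from this run:

SPDKCLI=./scripts/spdkcli.py

# Build up: malloc bdev -> subsystem -> namespace -> TCP listener.
$SPDKCLI "/bdevs/malloc create 32 512 Malloc1"   # size_mb block_size name (assumed order)
$SPDKCLI "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode9 SERIAL0001 max_namespaces=2 allow_any_host=True"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode9/namespaces create Malloc1"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode9/listen_addresses create tcp 127.0.0.1 4260 IPv4"

# Tear down in reverse, as the delete pass above does.
$SPDKCLI "/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode9"
$SPDKCLI "/bdevs/malloc delete Malloc1"

Each invocation runs one spdkcli command non-interactively, the same way the check_match step above drives 'spdkcli.py ll /nvmf'.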
00:35:22.624 04:32:36 -- common/autotest_common.sh@931 -- # uname 00:35:22.624 04:32:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:22.624 04:32:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63094 00:35:22.624 04:32:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:22.624 04:32:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:22.624 04:32:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63094' 00:35:22.624 killing process with pid 63094 00:35:22.624 04:32:36 -- common/autotest_common.sh@945 -- # kill 63094 00:35:22.624 [2024-05-14 04:32:36.822116] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:35:22.624 04:32:36 -- common/autotest_common.sh@950 -- # wait 63094 00:35:22.883 04:32:37 -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:22.883 04:32:37 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:22.883 04:32:37 -- spdkcli/common.sh@13 -- # '[' -n 63094 ']' 00:35:22.883 04:32:37 -- spdkcli/common.sh@14 -- # killprocess 63094 00:35:22.883 04:32:37 -- common/autotest_common.sh@926 -- # '[' -z 63094 ']' 00:35:22.883 04:32:37 -- common/autotest_common.sh@930 -- # kill -0 63094 00:35:22.883 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (63094) - No such process 00:35:22.883 04:32:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 63094 is not found' 00:35:22.883 Process with pid 63094 is not found 00:35:22.883 04:32:37 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:22.883 04:32:37 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:22.883 04:32:37 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:22.883 00:35:22.883 real 0m15.882s 00:35:22.883 user 0m32.172s 00:35:22.883 sys 0m0.748s 00:35:22.883 04:32:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:22.883 04:32:37 -- common/autotest_common.sh@10 -- # set +x 00:35:22.883 ************************************ 00:35:22.883 END TEST spdkcli_nvmf_tcp 00:35:22.883 ************************************ 00:35:22.883 04:32:37 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:22.883 04:32:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:35:22.883 04:32:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:22.883 04:32:37 -- common/autotest_common.sh@10 -- # set +x 00:35:22.883 ************************************ 00:35:22.883 START TEST nvmf_identify_passthru 00:35:22.883 ************************************ 00:35:22.883 04:32:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:22.883 * Looking for test storage... 
00:35:22.883 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:35:22.883 04:32:37 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:35:22.883 04:32:37 -- nvmf/common.sh@7 -- # uname -s 00:35:22.883 04:32:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:22.883 04:32:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:22.883 04:32:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:22.883 04:32:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:22.883 04:32:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:22.883 04:32:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:22.883 04:32:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:22.883 04:32:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:22.883 04:32:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:22.883 04:32:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:22.883 04:32:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:35:22.883 04:32:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:35:22.883 04:32:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:22.883 04:32:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:22.883 04:32:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:35:22.883 04:32:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:35:22.883 04:32:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:22.883 04:32:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:22.883 04:32:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:22.883 04:32:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.883 04:32:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.883 04:32:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.883 04:32:37 -- paths/export.sh@5 -- # export PATH 00:35:22.883 04:32:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.883 04:32:37 -- nvmf/common.sh@46 -- # : 0 00:35:22.883 04:32:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:22.883 04:32:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:22.883 04:32:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:22.883 04:32:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:22.883 04:32:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:22.883 04:32:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:22.883 04:32:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:22.883 04:32:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:22.883 04:32:37 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:35:22.883 04:32:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:22.883 04:32:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:22.883 04:32:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:22.883 04:32:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.883 04:32:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.883 04:32:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.883 04:32:37 -- paths/export.sh@5 -- # export PATH 00:35:22.883 04:32:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:22.883 04:32:37 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:35:22.883 04:32:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:22.883 04:32:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:22.883 04:32:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:22.883 04:32:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:22.883 04:32:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:22.883 04:32:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.883 04:32:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:22.883 04:32:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.883 04:32:37 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:35:22.883 04:32:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:35:22.883 04:32:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:35:22.883 04:32:37 -- common/autotest_common.sh@10 -- # set +x 00:35:28.164 04:32:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:35:28.164 04:32:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:35:28.164 04:32:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:35:28.164 04:32:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:35:28.164 04:32:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:35:28.164 04:32:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:35:28.164 04:32:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:35:28.164 04:32:42 -- nvmf/common.sh@294 -- # net_devs=() 00:35:28.164 04:32:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:35:28.164 04:32:42 -- nvmf/common.sh@295 -- # e810=() 00:35:28.164 04:32:42 -- nvmf/common.sh@295 -- # local -ga e810 00:35:28.164 04:32:42 -- nvmf/common.sh@296 -- # x722=() 00:35:28.164 04:32:42 -- nvmf/common.sh@296 -- # local -ga x722 00:35:28.164 04:32:42 -- nvmf/common.sh@297 -- # mlx=() 00:35:28.164 04:32:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:35:28.164 04:32:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:28.164 04:32:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:35:28.164 04:32:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:35:28.164 04:32:42 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:35:28.164 04:32:42 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:35:28.164 04:32:42 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:35:28.164 04:32:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:35:28.164 04:32:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:28.164 04:32:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:35:28.164 Found 0000:27:00.0 (0x8086 - 
0x159b) 00:35:28.164 04:32:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:28.164 04:32:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:28.164 04:32:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:28.165 04:32:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:35:28.165 Found 0000:27:00.1 (0x8086 - 0x159b) 00:35:28.165 04:32:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:35:28.165 04:32:42 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:28.165 04:32:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.165 04:32:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:28.165 04:32:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.165 04:32:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:35:28.165 Found net devices under 0000:27:00.0: cvl_0_0 00:35:28.165 04:32:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.165 04:32:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:28.165 04:32:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.165 04:32:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:28.165 04:32:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.165 04:32:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:35:28.165 Found net devices under 0000:27:00.1: cvl_0_1 00:35:28.165 04:32:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.165 04:32:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:35:28.165 04:32:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:35:28.165 04:32:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:35:28.165 04:32:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:28.165 04:32:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:28.165 04:32:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:28.165 04:32:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:35:28.165 04:32:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:28.165 04:32:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:28.165 04:32:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:35:28.165 04:32:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:28.165 04:32:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:28.165 04:32:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:35:28.165 04:32:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:35:28.165 04:32:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:35:28.165 04:32:42 -- nvmf/common.sh@250 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:35:28.165 04:32:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:28.165 04:32:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:28.165 04:32:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:35:28.165 04:32:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:28.165 04:32:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:28.165 04:32:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:28.165 04:32:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:35:28.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:28.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:35:28.165 00:35:28.165 --- 10.0.0.2 ping statistics --- 00:35:28.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.165 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:35:28.165 04:32:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:28.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:28.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:35:28.165 00:35:28.165 --- 10.0.0.1 ping statistics --- 00:35:28.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.165 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:35:28.165 04:32:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:28.165 04:32:42 -- nvmf/common.sh@410 -- # return 0 00:35:28.165 04:32:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:35:28.165 04:32:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:28.165 04:32:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:28.165 04:32:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:28.165 04:32:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:28.165 04:32:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:28.165 04:32:42 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:28.165 04:32:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:28.165 04:32:42 -- common/autotest_common.sh@10 -- # set +x 00:35:28.165 04:32:42 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:28.165 04:32:42 -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:28.165 04:32:42 -- common/autotest_common.sh@1509 -- # local bdfs 00:35:28.165 04:32:42 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:28.165 04:32:42 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:28.165 04:32:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:28.165 04:32:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:35:28.165 04:32:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:28.165 04:32:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:28.165 04:32:42 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:28.426 04:32:42 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:35:28.427 04:32:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:35:28.427 04:32:42 -- common/autotest_common.sh@1512 -- # echo 0000:c9:00.0 00:35:28.427 04:32:42 -- target/identify_passthru.sh@16 -- # bdf=0000:c9:00.0 00:35:28.427 04:32:42 -- 
target/identify_passthru.sh@17 -- # '[' -z 0000:c9:00.0 ']' 00:35:28.427 04:32:42 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 00:35:28.427 04:32:42 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:28.427 04:32:42 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:28.427 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.697 04:32:48 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ9413009R2P0BGN 00:35:33.697 04:32:48 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 00:35:33.697 04:32:48 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:33.697 04:32:48 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:33.697 EAL: No free 2048 kB hugepages reported on node 1 00:35:38.978 04:32:53 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:38.978 04:32:53 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:38.978 04:32:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:38.978 04:32:53 -- common/autotest_common.sh@10 -- # set +x 00:35:38.978 04:32:53 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:38.978 04:32:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:38.978 04:32:53 -- common/autotest_common.sh@10 -- # set +x 00:35:38.978 04:32:53 -- target/identify_passthru.sh@31 -- # nvmfpid=71796 00:35:38.978 04:32:53 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:38.978 04:32:53 -- target/identify_passthru.sh@35 -- # waitforlisten 71796 00:35:38.978 04:32:53 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:38.978 04:32:53 -- common/autotest_common.sh@819 -- # '[' -z 71796 ']' 00:35:38.978 04:32:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.978 04:32:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:38.978 04:32:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:38.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.978 04:32:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:38.978 04:32:53 -- common/autotest_common.sh@10 -- # set +x 00:35:38.978 [2024-05-14 04:32:53.345631] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:38.978 [2024-05-14 04:32:53.345739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:38.978 EAL: No free 2048 kB hugepages reported on node 1 00:35:38.978 [2024-05-14 04:32:53.466168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:38.978 [2024-05-14 04:32:53.559903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:38.978 [2024-05-14 04:32:53.560065] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
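The identify step above resolves the first local NVMe controller (0000:c9:00.0 in this run) from gen_nvme.sh output and scrapes its serial and model strings, which are compared later against what the passthru subsystem reports over TCP. A condensed sketch of that lookup, assuming an SPDK checkout in the current directory and jq on the PATH:

# Pick the first PCIe NVMe controller known to SPDK.
bdf=$(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)

# Read identify data directly over PCIe, as the test does above.
serial=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
model=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
echo "first NVMe at $bdf: serial=$serial model=$model"

Note that awk '{print $3}' keeps only the first word of the model string, which is why the log records nvme_model_number=INTEL rather than the full model name.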
00:35:38.978 [2024-05-14 04:32:53.560077] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:38.978 [2024-05-14 04:32:53.560086] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:38.978 [2024-05-14 04:32:53.560242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.978 [2024-05-14 04:32:53.560263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:38.978 [2024-05-14 04:32:53.560368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.978 [2024-05-14 04:32:53.560377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:39.544 04:32:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:39.544 04:32:54 -- common/autotest_common.sh@852 -- # return 0 00:35:39.544 04:32:54 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:39.544 04:32:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:39.544 04:32:54 -- common/autotest_common.sh@10 -- # set +x 00:35:39.544 INFO: Log level set to 20 00:35:39.544 INFO: Requests: 00:35:39.544 { 00:35:39.544 "jsonrpc": "2.0", 00:35:39.544 "method": "nvmf_set_config", 00:35:39.544 "id": 1, 00:35:39.544 "params": { 00:35:39.544 "admin_cmd_passthru": { 00:35:39.544 "identify_ctrlr": true 00:35:39.544 } 00:35:39.544 } 00:35:39.544 } 00:35:39.544 00:35:39.544 INFO: response: 00:35:39.544 { 00:35:39.544 "jsonrpc": "2.0", 00:35:39.544 "id": 1, 00:35:39.544 "result": true 00:35:39.544 } 00:35:39.544 00:35:39.544 04:32:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:39.544 04:32:54 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:39.544 04:32:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:39.544 04:32:54 -- common/autotest_common.sh@10 -- # set +x 00:35:39.544 INFO: Setting log level to 20 00:35:39.544 INFO: Setting log level to 20 00:35:39.544 INFO: Log level set to 20 00:35:39.544 INFO: Log level set to 20 00:35:39.544 INFO: Requests: 00:35:39.544 { 00:35:39.544 "jsonrpc": "2.0", 00:35:39.544 "method": "framework_start_init", 00:35:39.544 "id": 1 00:35:39.544 } 00:35:39.544 00:35:39.544 INFO: Requests: 00:35:39.544 { 00:35:39.544 "jsonrpc": "2.0", 00:35:39.544 "method": "framework_start_init", 00:35:39.544 "id": 1 00:35:39.544 } 00:35:39.544 00:35:39.803 [2024-05-14 04:32:54.221467] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:39.803 INFO: response: 00:35:39.803 { 00:35:39.803 "jsonrpc": "2.0", 00:35:39.803 "id": 1, 00:35:39.803 "result": true 00:35:39.803 } 00:35:39.803 00:35:39.803 INFO: response: 00:35:39.803 { 00:35:39.803 "jsonrpc": "2.0", 00:35:39.803 "id": 1, 00:35:39.803 "result": true 00:35:39.803 } 00:35:39.803 00:35:39.803 04:32:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:39.803 04:32:54 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:39.803 04:32:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:39.803 04:32:54 -- common/autotest_common.sh@10 -- # set +x 00:35:39.803 INFO: Setting log level to 40 00:35:39.803 INFO: Setting log level to 40 00:35:39.803 INFO: Setting log level to 40 00:35:39.803 [2024-05-14 04:32:54.231803] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.803 04:32:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:39.803 04:32:54 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:35:39.803 04:32:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:39.803 04:32:54 -- common/autotest_common.sh@10 -- # set +x 00:35:39.803 04:32:54 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:c9:00.0 00:35:39.803 04:32:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:39.803 04:32:54 -- common/autotest_common.sh@10 -- # set +x 00:35:43.143 Nvme0n1 00:35:43.143 04:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.143 04:32:57 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:43.143 04:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.143 04:32:57 -- common/autotest_common.sh@10 -- # set +x 00:35:43.143 04:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.143 04:32:57 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:43.143 04:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.143 04:32:57 -- common/autotest_common.sh@10 -- # set +x 00:35:43.143 04:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.143 04:32:57 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.143 04:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.143 04:32:57 -- common/autotest_common.sh@10 -- # set +x 00:35:43.143 [2024-05-14 04:32:57.140435] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.143 04:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.143 04:32:57 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:43.143 04:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.143 04:32:57 -- common/autotest_common.sh@10 -- # set +x 00:35:43.143 [2024-05-14 04:32:57.148115] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:35:43.143 [ 00:35:43.143 { 00:35:43.143 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:43.143 "subtype": "Discovery", 00:35:43.143 "listen_addresses": [], 00:35:43.143 "allow_any_host": true, 00:35:43.143 "hosts": [] 00:35:43.143 }, 00:35:43.143 { 00:35:43.143 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:43.143 "subtype": "NVMe", 00:35:43.143 "listen_addresses": [ 00:35:43.143 { 00:35:43.143 "transport": "TCP", 00:35:43.143 "trtype": "TCP", 00:35:43.143 "adrfam": "IPv4", 00:35:43.143 "traddr": "10.0.0.2", 00:35:43.143 "trsvcid": "4420" 00:35:43.143 } 00:35:43.143 ], 00:35:43.143 "allow_any_host": true, 00:35:43.143 "hosts": [], 00:35:43.143 "serial_number": "SPDK00000000000001", 00:35:43.143 "model_number": "SPDK bdev Controller", 00:35:43.143 "max_namespaces": 1, 00:35:43.143 "min_cntlid": 1, 00:35:43.143 "max_cntlid": 65519, 00:35:43.143 "namespaces": [ 00:35:43.143 { 00:35:43.143 "nsid": 1, 00:35:43.143 "bdev_name": "Nvme0n1", 00:35:43.143 "name": "Nvme0n1", 00:35:43.143 "nguid": "2BC824E8A41C4011B4B762C2A7EB24AA", 00:35:43.143 "uuid": "2bc824e8-a41c-4011-b4b7-62c2a7eb24aa" 00:35:43.143 } 00:35:43.143 ] 00:35:43.143 } 00:35:43.143 ] 00:35:43.143 04:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.143 04:32:57 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:43.143 04:32:57 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:43.143 04:32:57 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:43.143 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.143 04:32:57 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ9413009R2P0BGN 00:35:43.143 04:32:57 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:43.143 04:32:57 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:43.143 04:32:57 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:43.143 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.402 04:32:57 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:43.402 04:32:57 -- target/identify_passthru.sh@63 -- # '[' PHLJ9413009R2P0BGN '!=' PHLJ9413009R2P0BGN ']' 00:35:43.402 04:32:57 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:43.402 04:32:57 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:43.402 04:32:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.402 04:32:57 -- common/autotest_common.sh@10 -- # set +x 00:35:43.402 04:32:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.402 04:32:57 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:43.402 04:32:57 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:43.402 04:32:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:43.402 04:32:57 -- nvmf/common.sh@116 -- # sync 00:35:43.402 04:32:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:43.402 04:32:57 -- nvmf/common.sh@119 -- # set +e 00:35:43.402 04:32:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:43.402 04:32:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:43.402 rmmod nvme_tcp 00:35:43.402 rmmod nvme_fabrics 00:35:43.402 rmmod nvme_keyring 00:35:43.402 04:32:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:43.402 04:32:57 -- nvmf/common.sh@123 -- # set -e 00:35:43.402 04:32:57 -- nvmf/common.sh@124 -- # return 0 00:35:43.402 04:32:57 -- nvmf/common.sh@477 -- # '[' -n 71796 ']' 00:35:43.402 04:32:57 -- nvmf/common.sh@478 -- # killprocess 71796 00:35:43.402 04:32:57 -- common/autotest_common.sh@926 -- # '[' -z 71796 ']' 00:35:43.402 04:32:57 -- common/autotest_common.sh@930 -- # kill -0 71796 00:35:43.402 04:32:57 -- common/autotest_common.sh@931 -- # uname 00:35:43.402 04:32:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:43.402 04:32:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71796 00:35:43.402 04:32:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:43.402 04:32:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:43.402 04:32:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71796' 00:35:43.402 killing process with pid 71796 00:35:43.402 04:32:57 -- common/autotest_common.sh@945 -- # kill 71796 00:35:43.402 [2024-05-14 04:32:57.859739] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:35:43.402 04:32:57 -- common/autotest_common.sh@950 -- # wait 71796 00:35:46.692 04:33:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
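The comparison above is the core of the passthru test: identify data fetched over NVMe/TCP from nqn.2016-06.io.spdk:cnode1 must match what the physical controller reported earlier over PCIe, otherwise the test fails before cleanup. A hedged sketch of that check, reusing the $serial and $model variables from the PCIe lookup (variable names are illustrative):

IDENTIFY=./build/bin/spdk_nvme_identify
TCP_TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

tcp_serial=$($IDENTIFY -r "$TCP_TGT" | grep 'Serial Number:' | awk '{print $3}')
tcp_model=$($IDENTIFY -r "$TCP_TGT" | grep 'Model Number:' | awk '{print $3}')

# Passthru works only if the remote view matches the local controller.
[ "$tcp_serial" = "$serial" ] || { echo "serial mismatch" >&2; exit 1; }
[ "$tcp_model" = "$model" ]   || { echo "model mismatch"  >&2; exit 1; }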
00:35:46.692 04:33:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:46.692 04:33:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:46.692 04:33:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:46.692 04:33:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:46.692 04:33:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.692 04:33:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:46.692 04:33:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.076 04:33:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:48.076 00:35:48.076 real 0m25.309s 00:35:48.076 user 0m36.917s 00:35:48.076 sys 0m4.944s 00:35:48.076 04:33:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:48.076 04:33:02 -- common/autotest_common.sh@10 -- # set +x 00:35:48.076 ************************************ 00:35:48.076 END TEST nvmf_identify_passthru 00:35:48.076 ************************************ 00:35:48.076 04:33:02 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:48.076 04:33:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:48.076 04:33:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:48.076 04:33:02 -- common/autotest_common.sh@10 -- # set +x 00:35:48.076 ************************************ 00:35:48.076 START TEST nvmf_dif 00:35:48.076 ************************************ 00:35:48.076 04:33:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:48.335 * Looking for test storage... 00:35:48.335 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:35:48.335 04:33:02 -- target/dif.sh@13 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:35:48.335 04:33:02 -- nvmf/common.sh@7 -- # uname -s 00:35:48.335 04:33:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:48.335 04:33:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:48.335 04:33:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:48.335 04:33:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:48.335 04:33:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:48.335 04:33:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:48.335 04:33:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:48.335 04:33:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:48.335 04:33:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:48.335 04:33:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:48.335 04:33:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:35:48.335 04:33:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:35:48.335 04:33:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:48.335 04:33:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:48.335 04:33:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:35:48.335 04:33:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:35:48.335 04:33:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:48.335 04:33:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:48.335 04:33:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:48.335 04:33:02 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.335 04:33:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.335 04:33:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.335 04:33:02 -- paths/export.sh@5 -- # export PATH 00:35:48.335 04:33:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.335 04:33:02 -- nvmf/common.sh@46 -- # : 0 00:35:48.335 04:33:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:48.335 04:33:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:48.335 04:33:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:48.335 04:33:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:48.335 04:33:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:48.335 04:33:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:48.335 04:33:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:48.335 04:33:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:48.335 04:33:02 -- target/dif.sh@15 -- # NULL_META=16 00:35:48.335 04:33:02 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:48.335 04:33:02 -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:48.335 04:33:02 -- target/dif.sh@15 -- # NULL_DIF=1 00:35:48.335 04:33:02 -- target/dif.sh@135 -- # nvmftestinit 00:35:48.335 04:33:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:48.335 04:33:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:48.335 04:33:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:48.335 04:33:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:48.335 04:33:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:48.335 04:33:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:48.335 04:33:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:48.335 04:33:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.335 04:33:02 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:35:48.335 04:33:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:35:48.335 04:33:02 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:35:48.335 04:33:02 -- common/autotest_common.sh@10 -- # set +x 00:35:54.905 04:33:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:35:54.905 04:33:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:35:54.905 04:33:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:35:54.905 04:33:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:35:54.905 04:33:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:35:54.905 04:33:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:35:54.905 04:33:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:35:54.905 04:33:08 -- nvmf/common.sh@294 -- # net_devs=() 00:35:54.905 04:33:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:35:54.905 04:33:08 -- nvmf/common.sh@295 -- # e810=() 00:35:54.905 04:33:08 -- nvmf/common.sh@295 -- # local -ga e810 00:35:54.905 04:33:08 -- nvmf/common.sh@296 -- # x722=() 00:35:54.905 04:33:08 -- nvmf/common.sh@296 -- # local -ga x722 00:35:54.905 04:33:08 -- nvmf/common.sh@297 -- # mlx=() 00:35:54.905 04:33:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:35:54.905 04:33:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:54.905 04:33:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:35:54.905 04:33:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:35:54.905 04:33:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:54.905 04:33:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:35:54.905 Found 0000:27:00.0 (0x8086 - 0x159b) 00:35:54.905 04:33:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:54.905 04:33:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:35:54.905 Found 0000:27:00.1 (0x8086 - 0x159b) 00:35:54.905 04:33:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@350 -- # [[ 0x159b 
== \0\x\1\0\1\9 ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:35:54.905 04:33:08 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:54.905 04:33:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:54.905 04:33:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:54.905 04:33:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:54.905 04:33:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:35:54.905 Found net devices under 0000:27:00.0: cvl_0_0 00:35:54.905 04:33:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:54.905 04:33:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:54.905 04:33:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:54.905 04:33:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:54.905 04:33:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:54.905 04:33:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:35:54.905 Found net devices under 0000:27:00.1: cvl_0_1 00:35:54.905 04:33:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:54.905 04:33:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:35:54.905 04:33:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:35:54.905 04:33:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:35:54.905 04:33:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:35:54.905 04:33:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:54.905 04:33:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:54.905 04:33:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:54.905 04:33:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:35:54.905 04:33:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:54.905 04:33:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:54.905 04:33:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:35:54.905 04:33:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:54.905 04:33:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:54.905 04:33:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:35:54.905 04:33:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:35:54.905 04:33:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:35:54.905 04:33:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:54.905 04:33:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:54.905 04:33:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:54.905 04:33:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:35:54.905 04:33:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:54.905 04:33:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:54.905 04:33:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:54.905 04:33:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:35:54.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:54.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:35:54.905 00:35:54.905 --- 10.0.0.2 ping statistics --- 00:35:54.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.905 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:35:54.905 04:33:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:54.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:54.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:35:54.905 00:35:54.905 --- 10.0.0.1 ping statistics --- 00:35:54.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.905 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:35:54.905 04:33:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:54.905 04:33:08 -- nvmf/common.sh@410 -- # return 0 00:35:54.905 04:33:08 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:35:54.905 04:33:08 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:35:56.812 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:35:56.812 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:56.812 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:35:56.812 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:35:56.812 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:35:56.812 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:35:56.812 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:35:56.812 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:35:56.812 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:35:56.812 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:35:56.812 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:35:56.812 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:35:56.812 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:35:56.812 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:35:56.812 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:56.812 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:35:56.812 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:35:56.812 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:35:56.812 04:33:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:56.812 04:33:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:56.812 04:33:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:56.812 04:33:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:56.812 04:33:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:56.812 04:33:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:56.812 04:33:11 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:56.812 04:33:11 -- target/dif.sh@137 -- # nvmfappstart 00:35:56.812 04:33:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:56.812 04:33:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:56.812 04:33:11 -- common/autotest_common.sh@10 -- # set +x 00:35:56.812 04:33:11 -- nvmf/common.sh@469 -- # nvmfpid=78961 00:35:56.812 04:33:11 -- nvmf/common.sh@470 -- # waitforlisten 78961 00:35:56.812 04:33:11 -- common/autotest_common.sh@819 -- # '[' -z 78961 ']' 00:35:56.812 04:33:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.812 04:33:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:56.812 04:33:11 
-- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:56.812 04:33:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:56.812 04:33:11 -- common/autotest_common.sh@10 -- # set +x 00:35:56.812 04:33:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:56.812 [2024-05-14 04:33:11.295894] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:35:56.812 [2024-05-14 04:33:11.296022] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:56.812 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.072 [2024-05-14 04:33:11.433523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.072 [2024-05-14 04:33:11.526583] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:57.072 [2024-05-14 04:33:11.526776] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:57.072 [2024-05-14 04:33:11.526791] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:57.072 [2024-05-14 04:33:11.526801] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:57.072 [2024-05-14 04:33:11.526831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.638 04:33:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:57.638 04:33:11 -- common/autotest_common.sh@852 -- # return 0 00:35:57.638 04:33:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:57.638 04:33:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:57.638 04:33:11 -- common/autotest_common.sh@10 -- # set +x 00:35:57.638 04:33:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:57.638 04:33:12 -- target/dif.sh@139 -- # create_transport 00:35:57.638 04:33:12 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:57.638 04:33:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:57.638 04:33:12 -- common/autotest_common.sh@10 -- # set +x 00:35:57.638 [2024-05-14 04:33:12.007435] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.638 04:33:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:57.638 04:33:12 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:57.638 04:33:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:57.638 04:33:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:57.638 04:33:12 -- common/autotest_common.sh@10 -- # set +x 00:35:57.638 ************************************ 00:35:57.638 START TEST fio_dif_1_default 00:35:57.638 ************************************ 00:35:57.638 04:33:12 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:35:57.638 04:33:12 -- target/dif.sh@86 -- # create_subsystems 0 00:35:57.638 04:33:12 -- target/dif.sh@28 -- # local sub 00:35:57.638 04:33:12 -- target/dif.sh@30 -- # for sub in "$@" 00:35:57.638 04:33:12 -- target/dif.sh@31 -- # create_subsystem 0 00:35:57.638 04:33:12 -- target/dif.sh@18 -- # local sub_id=0 
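For the DIF tests the TCP transport is created with --dif-insert-or-strip, and the lines that follow back subsystem nqn.2016-06.io.spdk:cnode0 with a null bdev carrying 16 bytes of metadata per 512-byte block and DIF type 1 protection. The same setup expressed as direct rpc.py calls against a running nvmf_tgt (a sketch; script path and default RPC socket assumed from a standard SPDK checkout):

RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

With --dif-insert-or-strip the target inserts and verifies the protection information itself, so the fio job on the initiator side can treat the namespace as an ordinary 512-byte logical block device.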
00:35:57.638 04:33:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:57.638 04:33:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:57.638 04:33:12 -- common/autotest_common.sh@10 -- # set +x 00:35:57.638 bdev_null0 00:35:57.639 04:33:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:57.639 04:33:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:57.639 04:33:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:57.639 04:33:12 -- common/autotest_common.sh@10 -- # set +x 00:35:57.639 04:33:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:57.639 04:33:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:57.639 04:33:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:57.639 04:33:12 -- common/autotest_common.sh@10 -- # set +x 00:35:57.639 04:33:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:57.639 04:33:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:57.639 04:33:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:57.639 04:33:12 -- common/autotest_common.sh@10 -- # set +x 00:35:57.639 [2024-05-14 04:33:12.047573] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.639 04:33:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:57.639 04:33:12 -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:57.639 04:33:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.639 04:33:12 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.639 04:33:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:57.639 04:33:12 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:57.639 04:33:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:57.639 04:33:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:57.639 04:33:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:57.639 04:33:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.639 04:33:12 -- nvmf/common.sh@520 -- # config=() 00:35:57.639 04:33:12 -- common/autotest_common.sh@1320 -- # shift 00:35:57.639 04:33:12 -- nvmf/common.sh@520 -- # local subsystem config 00:35:57.639 04:33:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:57.639 04:33:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:57.639 04:33:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.639 04:33:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:57.639 { 00:35:57.639 "params": { 00:35:57.639 "name": "Nvme$subsystem", 00:35:57.639 "trtype": "$TEST_TRANSPORT", 00:35:57.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.639 "adrfam": "ipv4", 00:35:57.639 "trsvcid": "$NVMF_PORT", 00:35:57.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.639 "hdgst": ${hdgst:-false}, 00:35:57.639 "ddgst": ${ddgst:-false} 00:35:57.639 }, 00:35:57.639 "method": "bdev_nvme_attach_controller" 00:35:57.639 } 00:35:57.639 EOF 00:35:57.639 )") 00:35:57.639 04:33:12 
-- target/dif.sh@82 -- # gen_fio_conf 00:35:57.639 04:33:12 -- target/dif.sh@54 -- # local file 00:35:57.639 04:33:12 -- target/dif.sh@56 -- # cat 00:35:57.639 04:33:12 -- nvmf/common.sh@542 -- # cat 00:35:57.639 04:33:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.639 04:33:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:57.639 04:33:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:57.639 04:33:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:57.639 04:33:12 -- target/dif.sh@72 -- # (( file <= files )) 00:35:57.639 04:33:12 -- nvmf/common.sh@544 -- # jq . 00:35:57.639 04:33:12 -- nvmf/common.sh@545 -- # IFS=, 00:35:57.639 04:33:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:57.639 "params": { 00:35:57.639 "name": "Nvme0", 00:35:57.639 "trtype": "tcp", 00:35:57.639 "traddr": "10.0.0.2", 00:35:57.639 "adrfam": "ipv4", 00:35:57.639 "trsvcid": "4420", 00:35:57.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.639 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:57.639 "hdgst": false, 00:35:57.639 "ddgst": false 00:35:57.639 }, 00:35:57.639 "method": "bdev_nvme_attach_controller" 00:35:57.639 }' 00:35:57.639 04:33:12 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:57.639 04:33:12 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:57.639 04:33:12 -- common/autotest_common.sh@1326 -- # break 00:35:57.639 04:33:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:57.639 04:33:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.207 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:58.207 fio-3.35 00:35:58.207 Starting 1 thread 00:35:58.207 EAL: No free 2048 kB hugepages reported on node 1 00:35:58.774 [2024-05-14 04:33:13.059051] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
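The fio_dif_1_default case above creates one DIF-type-1 null bdev, exports it over NVMe/TCP, and runs the fio bdev plugin against a JSON config handed in over /dev/fd/62. A standalone approximation of the same steps is sketched below: the RPC lines mirror the trace, while the job file is a reconstruction (the real one travels over /dev/fd/61 and is never echoed into the log) with randread/4k/iodepth=4/10 s inferred from the results that follow, and the bdev name Nvme0n1 assumed from SPDK's usual controller+namespace naming. $SPDK is the path variable from the earlier sketch.

# RPCs mirroring the create_subsystems 0 trace above (sketch, not the script).
sudo "$SPDK/scripts/rpc.py" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
sudo "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
sudo "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
sudo "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Hypothetical job file matching the parameters reported below.
cat > dif1_default.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10
EOF

# Run through the bdev fio plugin; bdev.json is assumed to hold the
# bdev_nvme_attach_controller entry printed in the trace above.
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif1_default.fio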
00:35:58.774 [2024-05-14 04:33:13.059130] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:36:08.736 00:36:08.736 filename0: (groupid=0, jobs=1): err= 0: pid=79425: Tue May 14 04:33:23 2024 00:36:08.736 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10010msec) 00:36:08.736 slat (nsec): min=6004, max=33935, avg=7152.43, stdev=1822.53 00:36:08.736 clat (usec): min=40811, max=50269, avg=41340.40, stdev=736.75 00:36:08.736 lat (usec): min=40818, max=50303, avg=41347.55, stdev=737.13 00:36:08.736 clat percentiles (usec): 00:36:08.736 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:08.736 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:08.736 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:08.736 | 99.00th=[42206], 99.50th=[42730], 99.90th=[50070], 99.95th=[50070], 00:36:08.736 | 99.99th=[50070] 00:36:08.736 bw ( KiB/s): min= 352, max= 416, per=99.53%, avg=385.60, stdev=12.61, samples=20 00:36:08.736 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:36:08.736 lat (msec) : 50=99.59%, 100=0.41% 00:36:08.736 cpu : usr=96.04%, sys=3.65%, ctx=13, majf=0, minf=1637 00:36:08.736 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.736 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.736 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:08.736 00:36:08.736 Run status group 0 (all jobs): 00:36:08.736 READ: bw=387KiB/s (396kB/s), 387KiB/s-387KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10010-10010msec 00:36:09.304 ----------------------------------------------------- 00:36:09.304 Suppressions used: 00:36:09.304 count bytes template 00:36:09.304 1 8 /usr/src/fio/parse.c 00:36:09.304 1 8 libtcmalloc_minimal.so 00:36:09.304 1 904 libcrypto.so 00:36:09.304 ----------------------------------------------------- 00:36:09.304 00:36:09.304 04:33:23 -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:09.304 04:33:23 -- target/dif.sh@43 -- # local sub 00:36:09.304 04:33:23 -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.304 04:33:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:09.304 04:33:23 -- target/dif.sh@36 -- # local sub_id=0 00:36:09.304 04:33:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:09.304 04:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:09.304 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 04:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:09.562 04:33:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:09.562 04:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 04:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:09.562 00:36:09.562 real 0m11.890s 00:36:09.562 user 0m26.885s 00:36:09.562 sys 0m0.815s 00:36:09.562 04:33:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 ************************************ 00:36:09.562 END TEST fio_dif_1_default 00:36:09.562 ************************************ 00:36:09.562 04:33:23 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:09.562 04:33:23 
-- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:09.562 04:33:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 ************************************ 00:36:09.562 START TEST fio_dif_1_multi_subsystems 00:36:09.562 ************************************ 00:36:09.562 04:33:23 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:36:09.562 04:33:23 -- target/dif.sh@92 -- # local files=1 00:36:09.562 04:33:23 -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:09.562 04:33:23 -- target/dif.sh@28 -- # local sub 00:36:09.562 04:33:23 -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.562 04:33:23 -- target/dif.sh@31 -- # create_subsystem 0 00:36:09.562 04:33:23 -- target/dif.sh@18 -- # local sub_id=0 00:36:09.562 04:33:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:09.562 04:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 bdev_null0 00:36:09.562 04:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:09.562 04:33:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:09.562 04:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 04:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:09.562 04:33:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:09.562 04:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 04:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:09.562 04:33:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:09.562 04:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 [2024-05-14 04:33:23.971214] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.562 04:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:09.562 04:33:23 -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.562 04:33:23 -- target/dif.sh@31 -- # create_subsystem 1 00:36:09.562 04:33:23 -- target/dif.sh@18 -- # local sub_id=1 00:36:09.562 04:33:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:09.562 04:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 bdev_null1 00:36:09.562 04:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:09.562 04:33:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:09.562 04:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 04:33:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:09.562 04:33:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:09.562 04:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 04:33:23 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:09.562 04:33:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.562 04:33:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:09.562 04:33:23 -- common/autotest_common.sh@10 -- # set +x 00:36:09.562 04:33:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:09.562 04:33:24 -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:09.562 04:33:24 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:09.562 04:33:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:09.562 04:33:24 -- nvmf/common.sh@520 -- # config=() 00:36:09.562 04:33:24 -- nvmf/common.sh@520 -- # local subsystem config 00:36:09.562 04:33:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.562 04:33:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:09.562 04:33:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:09.562 { 00:36:09.562 "params": { 00:36:09.562 "name": "Nvme$subsystem", 00:36:09.562 "trtype": "$TEST_TRANSPORT", 00:36:09.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.562 "adrfam": "ipv4", 00:36:09.562 "trsvcid": "$NVMF_PORT", 00:36:09.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.562 "hdgst": ${hdgst:-false}, 00:36:09.562 "ddgst": ${ddgst:-false} 00:36:09.562 }, 00:36:09.562 "method": "bdev_nvme_attach_controller" 00:36:09.562 } 00:36:09.562 EOF 00:36:09.562 )") 00:36:09.563 04:33:24 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.563 04:33:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:36:09.563 04:33:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:09.563 04:33:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:36:09.563 04:33:24 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.563 04:33:24 -- common/autotest_common.sh@1320 -- # shift 00:36:09.563 04:33:24 -- target/dif.sh@82 -- # gen_fio_conf 00:36:09.563 04:33:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:36:09.563 04:33:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.563 04:33:24 -- target/dif.sh@54 -- # local file 00:36:09.563 04:33:24 -- target/dif.sh@56 -- # cat 00:36:09.563 04:33:24 -- nvmf/common.sh@542 -- # cat 00:36:09.563 04:33:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.563 04:33:24 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:09.563 04:33:24 -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.563 04:33:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:36:09.563 04:33:24 -- target/dif.sh@73 -- # cat 00:36:09.563 04:33:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:09.563 04:33:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:09.563 04:33:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:09.563 { 00:36:09.563 "params": { 00:36:09.563 "name": "Nvme$subsystem", 00:36:09.563 "trtype": "$TEST_TRANSPORT", 00:36:09.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.563 "adrfam": "ipv4", 00:36:09.563 "trsvcid": "$NVMF_PORT", 00:36:09.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.563 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.563 "hdgst": ${hdgst:-false}, 00:36:09.563 "ddgst": ${ddgst:-false} 00:36:09.563 }, 00:36:09.563 "method": "bdev_nvme_attach_controller" 00:36:09.563 } 00:36:09.563 EOF 00:36:09.563 )") 00:36:09.563 04:33:24 -- target/dif.sh@72 -- # (( file++ )) 00:36:09.563 04:33:24 -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.563 04:33:24 -- nvmf/common.sh@542 -- # cat 00:36:09.563 04:33:24 -- nvmf/common.sh@544 -- # jq . 00:36:09.563 04:33:24 -- nvmf/common.sh@545 -- # IFS=, 00:36:09.563 04:33:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:09.563 "params": { 00:36:09.563 "name": "Nvme0", 00:36:09.563 "trtype": "tcp", 00:36:09.563 "traddr": "10.0.0.2", 00:36:09.563 "adrfam": "ipv4", 00:36:09.563 "trsvcid": "4420", 00:36:09.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:09.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:09.563 "hdgst": false, 00:36:09.563 "ddgst": false 00:36:09.563 }, 00:36:09.563 "method": "bdev_nvme_attach_controller" 00:36:09.563 },{ 00:36:09.563 "params": { 00:36:09.563 "name": "Nvme1", 00:36:09.563 "trtype": "tcp", 00:36:09.563 "traddr": "10.0.0.2", 00:36:09.563 "adrfam": "ipv4", 00:36:09.563 "trsvcid": "4420", 00:36:09.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:09.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:09.563 "hdgst": false, 00:36:09.563 "ddgst": false 00:36:09.563 }, 00:36:09.563 "method": "bdev_nvme_attach_controller" 00:36:09.563 }' 00:36:09.563 04:33:24 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:09.563 04:33:24 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:09.563 04:33:24 -- common/autotest_common.sh@1326 -- # break 00:36:09.563 04:33:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:09.563 04:33:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.127 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:10.127 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:10.127 fio-3.35 00:36:10.127 Starting 2 threads 00:36:10.127 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.721 [2024-05-14 04:33:25.108619] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:36:10.721 [2024-05-14 04:33:25.108685] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:36:20.689 00:36:20.689 filename0: (groupid=0, jobs=1): err= 0: pid=81998: Tue May 14 04:33:35 2024 00:36:20.689 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:36:20.689 slat (nsec): min=5262, max=33731, avg=6884.51, stdev=1501.84 00:36:20.689 clat (usec): min=553, max=43430, avg=21039.13, stdev=20177.52 00:36:20.689 lat (usec): min=559, max=43464, avg=21046.02, stdev=20177.15 00:36:20.689 clat percentiles (usec): 00:36:20.689 | 1.00th=[ 635], 5.00th=[ 799], 10.00th=[ 816], 20.00th=[ 824], 00:36:20.689 | 30.00th=[ 832], 40.00th=[ 840], 50.00th=[40633], 60.00th=[41157], 00:36:20.689 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:20.689 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:36:20.689 | 99.99th=[43254] 00:36:20.689 bw ( KiB/s): min= 704, max= 768, per=66.38%, avg=761.26, stdev=20.18, samples=19 00:36:20.689 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:36:20.689 lat (usec) : 750=2.53%, 1000=46.32% 00:36:20.689 lat (msec) : 2=1.05%, 50=50.11% 00:36:20.689 cpu : usr=98.45%, sys=1.27%, ctx=14, majf=0, minf=1632 00:36:20.689 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.689 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.689 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:20.689 filename1: (groupid=0, jobs=1): err= 0: pid=81999: Tue May 14 04:33:35 2024 00:36:20.689 read: IOPS=97, BW=388KiB/s (397kB/s)(3888KiB/10020msec) 00:36:20.689 slat (nsec): min=3633, max=22006, avg=7236.72, stdev=1603.38 00:36:20.689 clat (usec): min=40783, max=43105, avg=41210.35, stdev=437.22 00:36:20.689 lat (usec): min=40789, max=43127, avg=41217.59, stdev=437.29 00:36:20.689 clat percentiles (usec): 00:36:20.689 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:20.689 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:20.689 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:36:20.689 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:36:20.689 | 99.99th=[43254] 00:36:20.689 bw ( KiB/s): min= 384, max= 416, per=33.75%, avg=387.20, stdev= 9.85, samples=20 00:36:20.689 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:36:20.689 lat (msec) : 50=100.00% 00:36:20.689 cpu : usr=98.54%, sys=1.17%, ctx=19, majf=0, minf=1638 00:36:20.689 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.689 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.689 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:20.689 00:36:20.689 Run status group 0 (all jobs): 00:36:20.689 READ: bw=1147KiB/s (1174kB/s), 388KiB/s-760KiB/s (397kB/s-778kB/s), io=11.2MiB (11.8MB), run=10003-10020msec 00:36:21.255 ----------------------------------------------------- 00:36:21.255 Suppressions used: 00:36:21.255 count bytes template 00:36:21.255 2 16 /usr/src/fio/parse.c 00:36:21.255 1 8 libtcmalloc_minimal.so 00:36:21.255 1 904 libcrypto.so 00:36:21.255 
----------------------------------------------------- 00:36:21.255 00:36:21.513 04:33:35 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:21.513 04:33:35 -- target/dif.sh@43 -- # local sub 00:36:21.513 04:33:35 -- target/dif.sh@45 -- # for sub in "$@" 00:36:21.513 04:33:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:21.513 04:33:35 -- target/dif.sh@36 -- # local sub_id=0 00:36:21.513 04:33:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:21.513 04:33:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:21.513 04:33:35 -- common/autotest_common.sh@10 -- # set +x 00:36:21.513 04:33:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:21.513 04:33:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:21.513 04:33:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:21.513 04:33:35 -- common/autotest_common.sh@10 -- # set +x 00:36:21.513 04:33:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:21.513 04:33:35 -- target/dif.sh@45 -- # for sub in "$@" 00:36:21.513 04:33:35 -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:21.513 04:33:35 -- target/dif.sh@36 -- # local sub_id=1 00:36:21.513 04:33:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:21.513 04:33:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:21.513 04:33:35 -- common/autotest_common.sh@10 -- # set +x 00:36:21.513 04:33:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:21.513 04:33:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:21.513 04:33:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:21.513 04:33:35 -- common/autotest_common.sh@10 -- # set +x 00:36:21.513 04:33:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:21.513 00:36:21.513 real 0m11.934s 00:36:21.513 user 0m34.034s 00:36:21.513 sys 0m0.669s 00:36:21.513 04:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:21.513 04:33:35 -- common/autotest_common.sh@10 -- # set +x 00:36:21.513 ************************************ 00:36:21.513 END TEST fio_dif_1_multi_subsystems 00:36:21.513 ************************************ 00:36:21.513 04:33:35 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:21.513 04:33:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:21.513 04:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:21.513 04:33:35 -- common/autotest_common.sh@10 -- # set +x 00:36:21.513 ************************************ 00:36:21.513 START TEST fio_dif_rand_params 00:36:21.513 ************************************ 00:36:21.513 04:33:35 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:36:21.513 04:33:35 -- target/dif.sh@100 -- # local NULL_DIF 00:36:21.513 04:33:35 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:21.513 04:33:35 -- target/dif.sh@103 -- # NULL_DIF=3 00:36:21.513 04:33:35 -- target/dif.sh@103 -- # bs=128k 00:36:21.513 04:33:35 -- target/dif.sh@103 -- # numjobs=3 00:36:21.513 04:33:35 -- target/dif.sh@103 -- # iodepth=3 00:36:21.513 04:33:35 -- target/dif.sh@103 -- # runtime=5 00:36:21.513 04:33:35 -- target/dif.sh@105 -- # create_subsystems 0 00:36:21.513 04:33:35 -- target/dif.sh@28 -- # local sub 00:36:21.513 04:33:35 -- target/dif.sh@30 -- # for sub in "$@" 00:36:21.513 04:33:35 -- target/dif.sh@31 -- # create_subsystem 0 00:36:21.513 04:33:35 -- target/dif.sh@18 -- # local sub_id=0 00:36:21.513 04:33:35 -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:21.513 04:33:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:21.513 04:33:35 -- common/autotest_common.sh@10 -- # set +x 00:36:21.513 bdev_null0 00:36:21.513 04:33:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:21.513 04:33:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:21.513 04:33:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:21.513 04:33:35 -- common/autotest_common.sh@10 -- # set +x 00:36:21.513 04:33:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:21.513 04:33:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:21.513 04:33:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:21.513 04:33:35 -- common/autotest_common.sh@10 -- # set +x 00:36:21.513 04:33:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:21.513 04:33:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:21.513 04:33:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:21.513 04:33:35 -- common/autotest_common.sh@10 -- # set +x 00:36:21.513 [2024-05-14 04:33:35.940725] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:21.513 04:33:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:21.513 04:33:35 -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:21.513 04:33:35 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:21.513 04:33:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.513 04:33:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:21.513 04:33:35 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:21.513 04:33:35 -- nvmf/common.sh@520 -- # config=() 00:36:21.513 04:33:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:36:21.513 04:33:35 -- nvmf/common.sh@520 -- # local subsystem config 00:36:21.513 04:33:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:21.513 04:33:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:36:21.513 04:33:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:21.513 04:33:35 -- target/dif.sh@82 -- # gen_fio_conf 00:36:21.513 04:33:35 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.513 04:33:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:21.513 { 00:36:21.513 "params": { 00:36:21.513 "name": "Nvme$subsystem", 00:36:21.513 "trtype": "$TEST_TRANSPORT", 00:36:21.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.513 "adrfam": "ipv4", 00:36:21.513 "trsvcid": "$NVMF_PORT", 00:36:21.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.513 "hdgst": ${hdgst:-false}, 00:36:21.513 "ddgst": ${ddgst:-false} 00:36:21.513 }, 00:36:21.513 "method": "bdev_nvme_attach_controller" 00:36:21.513 } 00:36:21.513 EOF 00:36:21.513 )") 00:36:21.513 04:33:35 -- common/autotest_common.sh@1320 -- # shift 00:36:21.513 04:33:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:36:21.513 04:33:35 -- target/dif.sh@54 -- # local file 00:36:21.513 04:33:35 -- 
common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:21.513 04:33:35 -- target/dif.sh@56 -- # cat 00:36:21.513 04:33:35 -- nvmf/common.sh@542 -- # cat 00:36:21.513 04:33:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:21.513 04:33:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:21.513 04:33:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:36:21.513 04:33:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:21.513 04:33:35 -- target/dif.sh@72 -- # (( file <= files )) 00:36:21.513 04:33:35 -- nvmf/common.sh@544 -- # jq . 00:36:21.513 04:33:35 -- nvmf/common.sh@545 -- # IFS=, 00:36:21.513 04:33:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:21.513 "params": { 00:36:21.513 "name": "Nvme0", 00:36:21.513 "trtype": "tcp", 00:36:21.513 "traddr": "10.0.0.2", 00:36:21.513 "adrfam": "ipv4", 00:36:21.513 "trsvcid": "4420", 00:36:21.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.513 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.513 "hdgst": false, 00:36:21.513 "ddgst": false 00:36:21.513 }, 00:36:21.513 "method": "bdev_nvme_attach_controller" 00:36:21.513 }' 00:36:21.513 04:33:35 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:21.513 04:33:35 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:21.513 04:33:35 -- common/autotest_common.sh@1326 -- # break 00:36:21.513 04:33:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:21.513 04:33:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:22.081 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:22.081 ... 00:36:22.081 fio-3.35 00:36:22.081 Starting 3 threads 00:36:22.081 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.341 [2024-05-14 04:33:36.859736] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
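The fio_dif_rand_params case switches the backing bdev to DIF type 3 and runs a heavier random-read job: 128 KiB blocks, 3 jobs, iodepth 3, 5 seconds per run (the bs/numjobs/iodepth/runtime assignments at dif.sh@103 above). The job file below is a hypothetical reconstruction of those parameters; the generated one is piped over /dev/fd/61 and never appears in the log, and the bdev name is again an assumption.

# Hypothetical job file for the NULL_DIF=3 / bs=128k / numjobs=3 / iodepth=3 /
# runtime=5 combination selected above.
cat > rand_params.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5
EOF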
00:36:22.341 [2024-05-14 04:33:36.859796] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:36:27.610 00:36:27.610 filename0: (groupid=0, jobs=1): err= 0: pid=84561: Tue May 14 04:33:42 2024 00:36:27.610 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(178MiB/5044msec) 00:36:27.610 slat (nsec): min=4145, max=19933, avg=7345.13, stdev=1345.15 00:36:27.610 clat (usec): min=3260, max=89992, avg=10616.49, stdev=13320.15 00:36:27.610 lat (usec): min=3266, max=90001, avg=10623.83, stdev=13320.31 00:36:27.610 clat percentiles (usec): 00:36:27.610 | 1.00th=[ 3752], 5.00th=[ 3982], 10.00th=[ 4146], 20.00th=[ 4490], 00:36:27.610 | 30.00th=[ 5276], 40.00th=[ 5866], 50.00th=[ 6259], 60.00th=[ 6718], 00:36:27.610 | 70.00th=[ 7701], 80.00th=[ 8979], 90.00th=[11863], 95.00th=[48497], 00:36:27.610 | 99.00th=[50594], 99.50th=[52167], 99.90th=[89654], 99.95th=[89654], 00:36:27.610 | 99.99th=[89654] 00:36:27.610 bw ( KiB/s): min=27648, max=46848, per=34.82%, avg=36284.00, stdev=6106.57, samples=10 00:36:27.610 iops : min= 216, max= 366, avg=283.40, stdev=47.61, samples=10 00:36:27.610 lat (msec) : 4=5.63%, 10=81.55%, 20=2.82%, 50=8.59%, 100=1.41% 00:36:27.610 cpu : usr=96.67%, sys=2.72%, ctx=319, majf=0, minf=1637 00:36:27.610 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.610 issued rwts: total=1420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.610 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.610 filename0: (groupid=0, jobs=1): err= 0: pid=84562: Tue May 14 04:33:42 2024 00:36:27.610 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(160MiB/5007msec) 00:36:27.610 slat (nsec): min=4235, max=24398, avg=7397.71, stdev=1500.09 00:36:27.610 clat (usec): min=3410, max=91281, avg=11697.86, stdev=13899.22 00:36:27.610 lat (usec): min=3417, max=91288, avg=11705.26, stdev=13899.31 00:36:27.610 clat percentiles (usec): 00:36:27.610 | 1.00th=[ 3654], 5.00th=[ 4047], 10.00th=[ 4359], 20.00th=[ 5211], 00:36:27.610 | 30.00th=[ 5669], 40.00th=[ 5997], 50.00th=[ 6456], 60.00th=[ 7046], 00:36:27.610 | 70.00th=[ 8225], 80.00th=[ 9241], 90.00th=[46400], 95.00th=[47973], 00:36:27.610 | 99.00th=[50070], 99.50th=[50594], 99.90th=[52691], 99.95th=[91751], 00:36:27.610 | 99.99th=[91751] 00:36:27.610 bw ( KiB/s): min=17920, max=53504, per=31.45%, avg=32775.60, stdev=9650.80, samples=10 00:36:27.610 iops : min= 140, max= 418, avg=256.00, stdev=75.36, samples=10 00:36:27.610 lat (msec) : 4=4.29%, 10=81.14%, 20=2.03%, 50=11.22%, 100=1.33% 00:36:27.610 cpu : usr=97.64%, sys=2.08%, ctx=8, majf=0, minf=1633 00:36:27.610 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.610 issued rwts: total=1283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.610 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.610 filename0: (groupid=0, jobs=1): err= 0: pid=84563: Tue May 14 04:33:42 2024 00:36:27.610 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(175MiB/5002msec) 00:36:27.610 slat (nsec): min=4986, max=21140, avg=7205.63, stdev=1351.21 00:36:27.610 clat (usec): min=3345, max=88675, avg=10684.81, stdev=13021.82 00:36:27.610 lat (usec): min=3352, max=88683, avg=10692.02, stdev=13021.99 00:36:27.610 
clat percentiles (usec): 00:36:27.610 | 1.00th=[ 3818], 5.00th=[ 4113], 10.00th=[ 4228], 20.00th=[ 4555], 00:36:27.610 | 30.00th=[ 5014], 40.00th=[ 5866], 50.00th=[ 6325], 60.00th=[ 6915], 00:36:27.610 | 70.00th=[ 7832], 80.00th=[ 8979], 90.00th=[45876], 95.00th=[47973], 00:36:27.610 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[88605], 00:36:27.610 | 99.99th=[88605] 00:36:27.610 bw ( KiB/s): min=20224, max=46848, per=33.03%, avg=34417.78, stdev=9082.69, samples=9 00:36:27.610 iops : min= 158, max= 366, avg=268.89, stdev=70.96, samples=9 00:36:27.610 lat (msec) : 4=2.49%, 10=84.18%, 20=2.92%, 50=8.77%, 100=1.64% 00:36:27.610 cpu : usr=97.56%, sys=2.16%, ctx=7, majf=0, minf=1637 00:36:27.610 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.610 issued rwts: total=1403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.610 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.610 00:36:27.610 Run status group 0 (all jobs): 00:36:27.610 READ: bw=102MiB/s (107MB/s), 32.0MiB/s-35.2MiB/s (33.6MB/s-36.9MB/s), io=513MiB (538MB), run=5002-5044msec 00:36:28.177 ----------------------------------------------------- 00:36:28.177 Suppressions used: 00:36:28.177 count bytes template 00:36:28.177 5 44 /usr/src/fio/parse.c 00:36:28.177 1 8 libtcmalloc_minimal.so 00:36:28.177 1 904 libcrypto.so 00:36:28.177 ----------------------------------------------------- 00:36:28.177 00:36:28.177 04:33:42 -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:28.178 04:33:42 -- target/dif.sh@43 -- # local sub 00:36:28.178 04:33:42 -- target/dif.sh@45 -- # for sub in "$@" 00:36:28.178 04:33:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:28.178 04:33:42 -- target/dif.sh@36 -- # local sub_id=0 00:36:28.178 04:33:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@109 -- # NULL_DIF=2 00:36:28.178 04:33:42 -- target/dif.sh@109 -- # bs=4k 00:36:28.178 04:33:42 -- target/dif.sh@109 -- # numjobs=8 00:36:28.178 04:33:42 -- target/dif.sh@109 -- # iodepth=16 00:36:28.178 04:33:42 -- target/dif.sh@109 -- # runtime= 00:36:28.178 04:33:42 -- target/dif.sh@109 -- # files=2 00:36:28.178 04:33:42 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:28.178 04:33:42 -- target/dif.sh@28 -- # local sub 00:36:28.178 04:33:42 -- target/dif.sh@30 -- # for sub in "$@" 00:36:28.178 04:33:42 -- target/dif.sh@31 -- # create_subsystem 0 00:36:28.178 04:33:42 -- target/dif.sh@18 -- # local sub_id=0 00:36:28.178 04:33:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 bdev_null0 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:36:28.178 04:33:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 [2024-05-14 04:33:42.653616] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@30 -- # for sub in "$@" 00:36:28.178 04:33:42 -- target/dif.sh@31 -- # create_subsystem 1 00:36:28.178 04:33:42 -- target/dif.sh@18 -- # local sub_id=1 00:36:28.178 04:33:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 bdev_null1 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@30 -- # for sub in "$@" 00:36:28.178 04:33:42 -- target/dif.sh@31 -- # create_subsystem 2 00:36:28.178 04:33:42 -- target/dif.sh@18 -- # local sub_id=2 00:36:28.178 04:33:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 bdev_null2 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 
04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:28.178 04:33:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:28.178 04:33:42 -- common/autotest_common.sh@10 -- # set +x 00:36:28.178 04:33:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:28.178 04:33:42 -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:28.178 04:33:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:28.178 04:33:42 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:28.178 04:33:42 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:28.178 04:33:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:36:28.178 04:33:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:28.178 04:33:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:28.178 04:33:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:36:28.178 04:33:42 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:28.178 04:33:42 -- nvmf/common.sh@520 -- # config=() 00:36:28.178 04:33:42 -- common/autotest_common.sh@1320 -- # shift 00:36:28.178 04:33:42 -- nvmf/common.sh@520 -- # local subsystem config 00:36:28.178 04:33:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:36:28.178 04:33:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:28.178 04:33:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:28.178 04:33:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:28.178 { 00:36:28.178 "params": { 00:36:28.178 "name": "Nvme$subsystem", 00:36:28.178 "trtype": "$TEST_TRANSPORT", 00:36:28.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:28.178 "adrfam": "ipv4", 00:36:28.178 "trsvcid": "$NVMF_PORT", 00:36:28.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:28.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:28.178 "hdgst": ${hdgst:-false}, 00:36:28.178 "ddgst": ${ddgst:-false} 00:36:28.178 }, 00:36:28.178 "method": "bdev_nvme_attach_controller" 00:36:28.178 } 00:36:28.178 EOF 00:36:28.178 )") 00:36:28.178 04:33:42 -- target/dif.sh@82 -- # gen_fio_conf 00:36:28.178 04:33:42 -- target/dif.sh@54 -- # local file 00:36:28.178 04:33:42 -- target/dif.sh@56 -- # cat 00:36:28.178 04:33:42 -- nvmf/common.sh@542 -- # cat 00:36:28.178 04:33:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:36:28.178 04:33:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:28.178 04:33:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:28.178 04:33:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:28.178 04:33:42 -- target/dif.sh@72 -- # (( file <= files )) 00:36:28.178 04:33:42 -- target/dif.sh@73 -- # cat 00:36:28.178 04:33:42 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:36:28.178 04:33:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:28.178 { 00:36:28.178 "params": { 00:36:28.178 "name": "Nvme$subsystem", 00:36:28.178 "trtype": "$TEST_TRANSPORT", 00:36:28.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:28.178 "adrfam": "ipv4", 00:36:28.178 "trsvcid": "$NVMF_PORT", 00:36:28.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:28.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:28.178 "hdgst": ${hdgst:-false}, 00:36:28.178 "ddgst": ${ddgst:-false} 00:36:28.178 }, 00:36:28.178 "method": "bdev_nvme_attach_controller" 00:36:28.178 } 00:36:28.178 EOF 00:36:28.178 )") 00:36:28.178 04:33:42 -- target/dif.sh@72 -- # (( file++ )) 00:36:28.178 04:33:42 -- target/dif.sh@72 -- # (( file <= files )) 00:36:28.178 04:33:42 -- target/dif.sh@73 -- # cat 00:36:28.178 04:33:42 -- nvmf/common.sh@542 -- # cat 00:36:28.178 04:33:42 -- target/dif.sh@72 -- # (( file++ )) 00:36:28.178 04:33:42 -- target/dif.sh@72 -- # (( file <= files )) 00:36:28.178 04:33:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:28.178 04:33:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:28.178 { 00:36:28.178 "params": { 00:36:28.178 "name": "Nvme$subsystem", 00:36:28.178 "trtype": "$TEST_TRANSPORT", 00:36:28.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:28.178 "adrfam": "ipv4", 00:36:28.178 "trsvcid": "$NVMF_PORT", 00:36:28.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:28.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:28.178 "hdgst": ${hdgst:-false}, 00:36:28.178 "ddgst": ${ddgst:-false} 00:36:28.178 }, 00:36:28.178 "method": "bdev_nvme_attach_controller" 00:36:28.178 } 00:36:28.178 EOF 00:36:28.178 )") 00:36:28.178 04:33:42 -- nvmf/common.sh@542 -- # cat 00:36:28.178 04:33:42 -- nvmf/common.sh@544 -- # jq . 
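The ldd/grep/awk lines traced above locate the ASan runtime linked into the fio plugin; it is then prepended to LD_PRELOAD before fio is launched, since with an ASan-instrumented build the sanitizer runtime must be loaded first or ASan refuses to start. A condensed sketch of that sequence, reusing the plugin path from this workspace and the hypothetical config/job files from the earlier sketches:

# Resolve libasan for the plugin and assemble LD_PRELOAD the same way the
# fio_bdev helper does above (sketch only).
plugin="$SPDK/build/fio/spdk_bdev"
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvmf_bdev.json rand_params.fio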
00:36:28.178 04:33:42 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:28.178 04:33:42 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:28.178 04:33:42 -- common/autotest_common.sh@1326 -- # break 00:36:28.178 04:33:42 -- nvmf/common.sh@545 -- # IFS=, 00:36:28.178 04:33:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:28.178 04:33:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:28.178 "params": { 00:36:28.178 "name": "Nvme0", 00:36:28.178 "trtype": "tcp", 00:36:28.178 "traddr": "10.0.0.2", 00:36:28.178 "adrfam": "ipv4", 00:36:28.178 "trsvcid": "4420", 00:36:28.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:28.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:28.179 "hdgst": false, 00:36:28.179 "ddgst": false 00:36:28.179 }, 00:36:28.179 "method": "bdev_nvme_attach_controller" 00:36:28.179 },{ 00:36:28.179 "params": { 00:36:28.179 "name": "Nvme1", 00:36:28.179 "trtype": "tcp", 00:36:28.179 "traddr": "10.0.0.2", 00:36:28.179 "adrfam": "ipv4", 00:36:28.179 "trsvcid": "4420", 00:36:28.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:28.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:28.179 "hdgst": false, 00:36:28.179 "ddgst": false 00:36:28.179 }, 00:36:28.179 "method": "bdev_nvme_attach_controller" 00:36:28.179 },{ 00:36:28.179 "params": { 00:36:28.179 "name": "Nvme2", 00:36:28.179 "trtype": "tcp", 00:36:28.179 "traddr": "10.0.0.2", 00:36:28.179 "adrfam": "ipv4", 00:36:28.179 "trsvcid": "4420", 00:36:28.179 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:28.179 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:28.179 "hdgst": false, 00:36:28.179 "ddgst": false 00:36:28.179 }, 00:36:28.179 "method": "bdev_nvme_attach_controller" 00:36:28.179 }' 00:36:28.179 04:33:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:28.756 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:28.756 ... 00:36:28.756 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:28.756 ... 00:36:28.756 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:28.756 ... 00:36:28.756 fio-3.35 00:36:28.756 Starting 24 threads 00:36:28.756 EAL: No free 2048 kB hugepages reported on node 1 00:36:29.688 [2024-05-14 04:33:43.922988] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:36:29.688 [2024-05-14 04:33:43.923052] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:36:39.661 00:36:39.661 filename0: (groupid=0, jobs=1): err= 0: pid=86204: Tue May 14 04:33:54 2024 00:36:39.661 read: IOPS=536, BW=2147KiB/s (2198kB/s)(21.0MiB/10017msec) 00:36:39.661 slat (usec): min=6, max=172, avg=49.04, stdev=25.99 00:36:39.661 clat (usec): min=17049, max=60875, avg=29397.81, stdev=2296.90 00:36:39.661 lat (usec): min=17056, max=60926, avg=29446.85, stdev=2297.57 00:36:39.661 clat percentiles (usec): 00:36:39.661 | 1.00th=[23987], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:36:39.661 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:36:39.661 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30540], 95.00th=[30802], 00:36:39.661 | 99.00th=[34866], 99.50th=[35914], 99.90th=[60556], 99.95th=[61080], 00:36:39.661 | 99.99th=[61080] 00:36:39.661 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2143.35, stdev=53.62, samples=20 00:36:39.661 iops : min= 512, max= 544, avg=535.80, stdev=13.39, samples=20 00:36:39.661 lat (msec) : 20=0.41%, 50=99.29%, 100=0.30% 00:36:39.661 cpu : usr=99.13%, sys=0.45%, ctx=16, majf=0, minf=1633 00:36:39.661 IO depths : 1=4.9%, 2=10.8%, 4=23.9%, 8=52.8%, 16=7.6%, 32=0.0%, >=64=0.0% 00:36:39.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.661 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.661 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.661 filename0: (groupid=0, jobs=1): err= 0: pid=86205: Tue May 14 04:33:54 2024 00:36:39.661 read: IOPS=536, BW=2147KiB/s (2198kB/s)(21.0MiB/10017msec) 00:36:39.661 slat (usec): min=6, max=246, avg=51.72, stdev=26.04 00:36:39.661 clat (usec): min=17790, max=63304, avg=29382.02, stdev=2367.61 00:36:39.661 lat (usec): min=17798, max=63365, avg=29433.73, stdev=2367.86 00:36:39.661 clat percentiles (usec): 00:36:39.661 | 1.00th=[23987], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:36:39.661 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29230], 60.00th=[29492], 00:36:39.661 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30540], 95.00th=[30802], 00:36:39.661 | 99.00th=[34341], 99.50th=[39060], 99.90th=[63177], 99.95th=[63177], 00:36:39.661 | 99.99th=[63177] 00:36:39.661 bw ( KiB/s): min= 2000, max= 2224, per=4.16%, avg=2142.95, stdev=61.38, samples=20 00:36:39.661 iops : min= 500, max= 556, avg=535.80, stdev=15.33, samples=20 00:36:39.661 lat (msec) : 20=0.50%, 50=99.20%, 100=0.30% 00:36:39.661 cpu : usr=98.95%, sys=0.62%, ctx=15, majf=0, minf=1632 00:36:39.661 IO depths : 1=5.2%, 2=10.9%, 4=23.6%, 8=53.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:39.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.661 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.661 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.661 filename0: (groupid=0, jobs=1): err= 0: pid=86206: Tue May 14 04:33:54 2024 00:36:39.661 read: IOPS=539, BW=2159KiB/s (2210kB/s)(21.1MiB/10003msec) 00:36:39.661 slat (usec): min=5, max=287, avg=56.73, stdev=29.16 00:36:39.661 clat (usec): min=8316, max=48879, avg=29150.40, stdev=2194.33 00:36:39.661 lat (usec): min=8324, max=48923, avg=29207.12, stdev=2195.42 00:36:39.661 clat percentiles (usec): 00:36:39.661 | 1.00th=[19006], 
5.00th=[27395], 10.00th=[27919], 20.00th=[28443], 00:36:39.661 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:36:39.661 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:36:39.661 | 99.00th=[34866], 99.50th=[40109], 99.90th=[48497], 99.95th=[49021], 00:36:39.661 | 99.99th=[49021] 00:36:39.661 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2149.05, stdev=68.52, samples=19 00:36:39.661 iops : min= 512, max= 576, avg=537.26, stdev=17.13, samples=19 00:36:39.661 lat (msec) : 10=0.07%, 20=1.00%, 50=98.93% 00:36:39.661 cpu : usr=99.02%, sys=0.59%, ctx=15, majf=0, minf=1636 00:36:39.661 IO depths : 1=5.1%, 2=10.8%, 4=23.1%, 8=53.2%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:39.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.661 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.661 issued rwts: total=5398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.661 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.661 filename0: (groupid=0, jobs=1): err= 0: pid=86207: Tue May 14 04:33:54 2024 00:36:39.661 read: IOPS=534, BW=2140KiB/s (2191kB/s)(20.9MiB/10019msec) 00:36:39.661 slat (usec): min=6, max=155, avg=56.63, stdev=28.69 00:36:39.661 clat (usec): min=23625, max=81161, avg=29439.16, stdev=2982.31 00:36:39.661 lat (usec): min=23632, max=81187, avg=29495.79, stdev=2977.13 00:36:39.661 clat percentiles (usec): 00:36:39.661 | 1.00th=[26870], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:36:39.661 | 30.00th=[28967], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:36:39.661 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30540], 95.00th=[30540], 00:36:39.661 | 99.00th=[31327], 99.50th=[31851], 99.90th=[81265], 99.95th=[81265], 00:36:39.661 | 99.99th=[81265] 00:36:39.661 bw ( KiB/s): min= 1920, max= 2192, per=4.16%, avg=2142.32, stdev=72.13, samples=19 00:36:39.661 iops : min= 480, max= 548, avg=535.58, stdev=18.03, samples=19 00:36:39.662 lat (msec) : 50=99.70%, 100=0.30% 00:36:39.662 cpu : usr=99.10%, sys=0.50%, ctx=15, majf=0, minf=1631 00:36:39.662 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:39.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.662 filename0: (groupid=0, jobs=1): err= 0: pid=86208: Tue May 14 04:33:54 2024 00:36:39.662 read: IOPS=537, BW=2150KiB/s (2202kB/s)(21.0MiB/10002msec) 00:36:39.662 slat (usec): min=5, max=321, avg=51.75, stdev=40.52 00:36:39.662 clat (usec): min=22403, max=39798, avg=29375.81, stdev=1224.26 00:36:39.662 lat (usec): min=22411, max=39846, avg=29427.56, stdev=1222.39 00:36:39.662 clat percentiles (usec): 00:36:39.662 | 1.00th=[26084], 5.00th=[27919], 10.00th=[28181], 20.00th=[28705], 00:36:39.662 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:36:39.662 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30540], 95.00th=[30802], 00:36:39.662 | 99.00th=[32113], 99.50th=[36439], 99.90th=[39584], 99.95th=[39584], 00:36:39.662 | 99.99th=[39584] 00:36:39.662 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2149.05, stdev=53.61, samples=19 00:36:39.662 iops : min= 512, max= 544, avg=537.26, stdev=13.40, samples=19 00:36:39.662 lat (msec) : 50=100.00% 00:36:39.662 cpu : usr=97.00%, sys=1.43%, ctx=117, majf=0, minf=1635 
00:36:39.662 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:39.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.662 filename0: (groupid=0, jobs=1): err= 0: pid=86209: Tue May 14 04:33:54 2024 00:36:39.662 read: IOPS=537, BW=2149KiB/s (2201kB/s)(21.0MiB/10006msec) 00:36:39.662 slat (usec): min=6, max=243, avg=66.04, stdev=39.27 00:36:39.662 clat (usec): min=18455, max=51639, avg=29091.04, stdev=1600.09 00:36:39.662 lat (usec): min=18467, max=51664, avg=29157.08, stdev=1602.07 00:36:39.662 clat percentiles (usec): 00:36:39.662 | 1.00th=[26870], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:36:39.662 | 30.00th=[28705], 40.00th=[28967], 50.00th=[28967], 60.00th=[29230], 00:36:39.662 | 70.00th=[29492], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278], 00:36:39.662 | 99.00th=[31065], 99.50th=[31327], 99.90th=[51643], 99.95th=[51643], 00:36:39.662 | 99.99th=[51643] 00:36:39.662 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2149.05, stdev=68.52, samples=19 00:36:39.662 iops : min= 512, max= 576, avg=537.26, stdev=17.13, samples=19 00:36:39.662 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:36:39.662 cpu : usr=97.13%, sys=1.50%, ctx=72, majf=0, minf=1634 00:36:39.662 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:39.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.662 filename0: (groupid=0, jobs=1): err= 0: pid=86210: Tue May 14 04:33:54 2024 00:36:39.662 read: IOPS=538, BW=2154KiB/s (2206kB/s)(21.1MiB/10024msec) 00:36:39.662 slat (usec): min=5, max=241, avg=61.40, stdev=44.40 00:36:39.662 clat (usec): min=8839, max=45097, avg=29134.00, stdev=1609.69 00:36:39.662 lat (usec): min=8851, max=45109, avg=29195.40, stdev=1613.75 00:36:39.662 clat percentiles (usec): 00:36:39.662 | 1.00th=[25035], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:36:39.662 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:36:39.662 | 70.00th=[29492], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:36:39.662 | 99.00th=[33424], 99.50th=[38536], 99.90th=[43779], 99.95th=[44827], 00:36:39.662 | 99.99th=[45351] 00:36:39.662 bw ( KiB/s): min= 2048, max= 2224, per=4.18%, avg=2152.80, stdev=54.81, samples=20 00:36:39.662 iops : min= 512, max= 556, avg=538.20, stdev=13.70, samples=20 00:36:39.662 lat (msec) : 10=0.04%, 20=0.33%, 50=99.63% 00:36:39.662 cpu : usr=98.63%, sys=0.86%, ctx=100, majf=0, minf=1634 00:36:39.662 IO depths : 1=5.4%, 2=11.0%, 4=22.5%, 8=53.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:36:39.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 issued rwts: total=5398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.662 filename0: (groupid=0, jobs=1): err= 0: pid=86211: Tue May 14 04:33:54 2024 00:36:39.662 read: IOPS=550, BW=2201KiB/s (2254kB/s)(21.5MiB/10002msec) 00:36:39.662 slat (usec): min=5, 
max=245, avg=25.52, stdev=31.28 00:36:39.662 clat (usec): min=1356, max=38978, avg=28869.57, stdev=3948.33 00:36:39.662 lat (usec): min=1367, max=38999, avg=28895.10, stdev=3950.49 00:36:39.662 clat percentiles (usec): 00:36:39.662 | 1.00th=[ 5342], 5.00th=[27919], 10.00th=[28181], 20.00th=[28705], 00:36:39.662 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:36:39.662 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:36:39.662 | 99.00th=[31327], 99.50th=[32637], 99.90th=[38536], 99.95th=[39060], 00:36:39.662 | 99.99th=[39060] 00:36:39.662 bw ( KiB/s): min= 2048, max= 2688, per=4.27%, avg=2202.95, stdev=145.19, samples=19 00:36:39.662 iops : min= 512, max= 672, avg=550.74, stdev=36.30, samples=19 00:36:39.662 lat (msec) : 2=0.87%, 10=1.20%, 20=0.55%, 50=97.38% 00:36:39.662 cpu : usr=99.01%, sys=0.58%, ctx=39, majf=0, minf=1635 00:36:39.662 IO depths : 1=5.9%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:39.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 issued rwts: total=5504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.662 filename1: (groupid=0, jobs=1): err= 0: pid=86212: Tue May 14 04:33:54 2024 00:36:39.662 read: IOPS=538, BW=2156KiB/s (2208kB/s)(21.1MiB/10004msec) 00:36:39.662 slat (usec): min=5, max=218, avg=42.82, stdev=43.79 00:36:39.662 clat (usec): min=7703, max=47447, avg=29338.98, stdev=1594.98 00:36:39.662 lat (usec): min=7723, max=47458, avg=29381.80, stdev=1589.27 00:36:39.662 clat percentiles (usec): 00:36:39.662 | 1.00th=[27132], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:36:39.662 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:36:39.662 | 70.00th=[29754], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:36:39.662 | 99.00th=[31851], 99.50th=[32637], 99.90th=[47449], 99.95th=[47449], 00:36:39.662 | 99.99th=[47449] 00:36:39.662 bw ( KiB/s): min= 2048, max= 2176, per=4.18%, avg=2155.79, stdev=47.95, samples=19 00:36:39.662 iops : min= 512, max= 544, avg=538.95, stdev=11.99, samples=19 00:36:39.662 lat (msec) : 10=0.04%, 20=0.37%, 50=99.59% 00:36:39.662 cpu : usr=98.93%, sys=0.60%, ctx=68, majf=0, minf=1637 00:36:39.662 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:39.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.662 filename1: (groupid=0, jobs=1): err= 0: pid=86213: Tue May 14 04:33:54 2024 00:36:39.662 read: IOPS=537, BW=2152KiB/s (2203kB/s)(21.1MiB/10023msec) 00:36:39.662 slat (usec): min=5, max=215, avg=58.21, stdev=39.01 00:36:39.662 clat (usec): min=15243, max=40309, avg=29221.27, stdev=1333.11 00:36:39.662 lat (usec): min=15251, max=40348, avg=29279.48, stdev=1332.30 00:36:39.662 clat percentiles (usec): 00:36:39.662 | 1.00th=[26608], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:36:39.662 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:36:39.662 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:36:39.662 | 99.00th=[32637], 99.50th=[33817], 99.90th=[40109], 99.95th=[40109], 00:36:39.662 | 99.99th=[40109] 00:36:39.662 
bw ( KiB/s): min= 2048, max= 2180, per=4.17%, avg=2150.60, stdev=52.64, samples=20 00:36:39.662 iops : min= 512, max= 545, avg=537.65, stdev=13.16, samples=20 00:36:39.662 lat (msec) : 20=0.33%, 50=99.67% 00:36:39.662 cpu : usr=98.99%, sys=0.57%, ctx=41, majf=0, minf=1634 00:36:39.662 IO depths : 1=5.8%, 2=11.9%, 4=24.6%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:39.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.662 filename1: (groupid=0, jobs=1): err= 0: pid=86214: Tue May 14 04:33:54 2024 00:36:39.662 read: IOPS=537, BW=2149KiB/s (2200kB/s)(21.0MiB/10008msec) 00:36:39.662 slat (usec): min=6, max=213, avg=66.02, stdev=38.66 00:36:39.662 clat (usec): min=18417, max=53624, avg=29106.37, stdev=1693.37 00:36:39.662 lat (usec): min=18449, max=53651, avg=29172.39, stdev=1695.19 00:36:39.662 clat percentiles (usec): 00:36:39.662 | 1.00th=[26870], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:36:39.662 | 30.00th=[28705], 40.00th=[28967], 50.00th=[28967], 60.00th=[29230], 00:36:39.662 | 70.00th=[29492], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278], 00:36:39.662 | 99.00th=[31065], 99.50th=[31327], 99.90th=[53740], 99.95th=[53740], 00:36:39.662 | 99.99th=[53740] 00:36:39.662 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2144.00, stdev=56.87, samples=20 00:36:39.662 iops : min= 512, max= 544, avg=536.00, stdev=14.22, samples=20 00:36:39.662 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:36:39.662 cpu : usr=98.85%, sys=0.70%, ctx=107, majf=0, minf=1633 00:36:39.662 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:39.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.662 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.662 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.662 filename1: (groupid=0, jobs=1): err= 0: pid=86215: Tue May 14 04:33:54 2024 00:36:39.662 read: IOPS=536, BW=2147KiB/s (2199kB/s)(21.0MiB/10014msec) 00:36:39.662 slat (usec): min=4, max=258, avg=60.18, stdev=34.35 00:36:39.662 clat (usec): min=18544, max=59318, avg=29215.87, stdev=1965.22 00:36:39.662 lat (usec): min=18563, max=59338, avg=29276.05, stdev=1965.17 00:36:39.662 clat percentiles (usec): 00:36:39.663 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:36:39.663 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:36:39.663 | 70.00th=[29492], 80.00th=[29754], 90.00th=[30278], 95.00th=[30540], 00:36:39.663 | 99.00th=[31589], 99.50th=[33817], 99.90th=[59507], 99.95th=[59507], 00:36:39.663 | 99.99th=[59507] 00:36:39.663 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2144.20, stdev=56.52, samples=20 00:36:39.663 iops : min= 512, max= 544, avg=536.05, stdev=14.13, samples=20 00:36:39.663 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:36:39.663 cpu : usr=98.84%, sys=0.71%, ctx=35, majf=0, minf=1636 00:36:39.663 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:39.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 issued rwts: total=5376,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:39.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.663 filename1: (groupid=0, jobs=1): err= 0: pid=86216: Tue May 14 04:33:54 2024 00:36:39.663 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10006msec) 00:36:39.663 slat (usec): min=6, max=195, avg=44.38, stdev=37.22 00:36:39.663 clat (usec): min=8448, max=54491, avg=30234.09, stdev=4543.94 00:36:39.663 lat (usec): min=8468, max=54501, avg=30278.48, stdev=4537.84 00:36:39.663 clat percentiles (usec): 00:36:39.663 | 1.00th=[18744], 5.00th=[27132], 10.00th=[27657], 20.00th=[28443], 00:36:39.663 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:36:39.663 | 70.00th=[30016], 80.00th=[30540], 90.00th=[33817], 95.00th=[39584], 00:36:39.663 | 99.00th=[50594], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:36:39.663 | 99.99th=[54264] 00:36:39.663 bw ( KiB/s): min= 1776, max= 2304, per=4.05%, avg=2086.74, stdev=125.68, samples=19 00:36:39.663 iops : min= 444, max= 576, avg=521.68, stdev=31.42, samples=19 00:36:39.663 lat (msec) : 10=0.15%, 20=1.41%, 50=97.32%, 100=1.11% 00:36:39.663 cpu : usr=98.38%, sys=0.87%, ctx=24, majf=0, minf=1636 00:36:39.663 IO depths : 1=2.0%, 2=5.2%, 4=15.1%, 8=65.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:36:39.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 complete : 0=0.0%, 4=92.2%, 8=3.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.663 filename1: (groupid=0, jobs=1): err= 0: pid=86217: Tue May 14 04:33:54 2024 00:36:39.663 read: IOPS=538, BW=2152KiB/s (2204kB/s)(21.1MiB/10021msec) 00:36:39.663 slat (usec): min=5, max=151, avg=52.64, stdev=26.82 00:36:39.663 clat (usec): min=17474, max=45385, avg=29300.91, stdev=1457.18 00:36:39.663 lat (usec): min=17481, max=45400, avg=29353.55, stdev=1455.98 00:36:39.663 clat percentiles (usec): 00:36:39.663 | 1.00th=[25297], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:36:39.663 | 30.00th=[28967], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:36:39.663 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30540], 95.00th=[30802], 00:36:39.663 | 99.00th=[32637], 99.50th=[35390], 99.90th=[44303], 99.95th=[45351], 00:36:39.663 | 99.99th=[45351] 00:36:39.663 bw ( KiB/s): min= 2048, max= 2176, per=4.18%, avg=2155.79, stdev=43.28, samples=19 00:36:39.663 iops : min= 512, max= 544, avg=538.95, stdev=10.82, samples=19 00:36:39.663 lat (msec) : 20=0.30%, 50=99.70% 00:36:39.663 cpu : usr=99.06%, sys=0.53%, ctx=13, majf=0, minf=1637 00:36:39.663 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:39.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.663 filename1: (groupid=0, jobs=1): err= 0: pid=86218: Tue May 14 04:33:54 2024 00:36:39.663 read: IOPS=537, BW=2149KiB/s (2201kB/s)(21.0MiB/10005msec) 00:36:39.663 slat (usec): min=6, max=139, avg=25.77, stdev=27.15 00:36:39.663 clat (usec): min=4605, max=76560, avg=29585.55, stdev=3207.25 00:36:39.663 lat (usec): min=4612, max=76588, avg=29611.32, stdev=3205.21 00:36:39.663 clat percentiles (usec): 00:36:39.663 | 1.00th=[18744], 5.00th=[27657], 10.00th=[28181], 20.00th=[28705], 00:36:39.663 | 
30.00th=[28967], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:36:39.663 | 70.00th=[30016], 80.00th=[30540], 90.00th=[30540], 95.00th=[31065], 00:36:39.663 | 99.00th=[42206], 99.50th=[47449], 99.90th=[57934], 99.95th=[57934], 00:36:39.663 | 99.99th=[77071] 00:36:39.663 bw ( KiB/s): min= 2036, max= 2288, per=4.16%, avg=2143.37, stdev=58.61, samples=19 00:36:39.663 iops : min= 509, max= 572, avg=535.84, stdev=14.65, samples=19 00:36:39.663 lat (msec) : 10=0.30%, 20=0.93%, 50=98.47%, 100=0.30% 00:36:39.663 cpu : usr=98.99%, sys=0.60%, ctx=13, majf=0, minf=1633 00:36:39.663 IO depths : 1=1.1%, 2=4.7%, 4=15.2%, 8=65.2%, 16=13.9%, 32=0.0%, >=64=0.0% 00:36:39.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 complete : 0=0.0%, 4=92.5%, 8=4.0%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.663 filename1: (groupid=0, jobs=1): err= 0: pid=86219: Tue May 14 04:33:54 2024 00:36:39.663 read: IOPS=536, BW=2147KiB/s (2199kB/s)(21.0MiB/10014msec) 00:36:39.663 slat (usec): min=6, max=173, avg=60.42, stdev=25.89 00:36:39.663 clat (usec): min=18098, max=60464, avg=29278.80, stdev=2128.97 00:36:39.663 lat (usec): min=18105, max=60503, avg=29339.22, stdev=2126.15 00:36:39.663 clat percentiles (usec): 00:36:39.663 | 1.00th=[26870], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:36:39.663 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:36:39.663 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:36:39.663 | 99.00th=[31327], 99.50th=[35914], 99.90th=[60556], 99.95th=[60556], 00:36:39.663 | 99.99th=[60556] 00:36:39.663 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2144.00, stdev=56.87, samples=20 00:36:39.663 iops : min= 512, max= 544, avg=536.00, stdev=14.22, samples=20 00:36:39.663 lat (msec) : 20=0.45%, 50=99.26%, 100=0.30% 00:36:39.663 cpu : usr=98.97%, sys=0.63%, ctx=15, majf=0, minf=1636 00:36:39.663 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:39.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.663 filename2: (groupid=0, jobs=1): err= 0: pid=86220: Tue May 14 04:33:54 2024 00:36:39.663 read: IOPS=535, BW=2142KiB/s (2193kB/s)(20.9MiB/10011msec) 00:36:39.663 slat (usec): min=5, max=185, avg=39.06, stdev=31.92 00:36:39.663 clat (usec): min=21628, max=89144, avg=29559.79, stdev=3128.36 00:36:39.663 lat (usec): min=21640, max=89169, avg=29598.85, stdev=3122.96 00:36:39.663 clat percentiles (usec): 00:36:39.663 | 1.00th=[26870], 5.00th=[27657], 10.00th=[28181], 20.00th=[28705], 00:36:39.663 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:36:39.663 | 70.00th=[29754], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:36:39.663 | 99.00th=[33424], 99.50th=[33817], 99.90th=[82314], 99.95th=[88605], 00:36:39.663 | 99.99th=[89654] 00:36:39.663 bw ( KiB/s): min= 1923, max= 2176, per=4.16%, avg=2142.47, stdev=71.42, samples=19 00:36:39.663 iops : min= 480, max= 544, avg=535.58, stdev=17.98, samples=19 00:36:39.663 lat (msec) : 50=99.70%, 100=0.30% 00:36:39.663 cpu : usr=99.06%, sys=0.52%, ctx=13, majf=0, minf=1636 00:36:39.663 IO depths : 1=5.4%, 
2=11.6%, 4=24.8%, 8=51.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:39.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.663 filename2: (groupid=0, jobs=1): err= 0: pid=86221: Tue May 14 04:33:54 2024 00:36:39.663 read: IOPS=535, BW=2143KiB/s (2194kB/s)(20.9MiB/10007msec) 00:36:39.663 slat (usec): min=5, max=155, avg=47.88, stdev=28.00 00:36:39.663 clat (usec): min=10915, max=53487, avg=29449.72, stdev=2549.35 00:36:39.663 lat (usec): min=10928, max=53512, avg=29497.60, stdev=2548.61 00:36:39.663 clat percentiles (usec): 00:36:39.663 | 1.00th=[21103], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:36:39.663 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29492], 00:36:39.663 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30540], 95.00th=[31065], 00:36:39.663 | 99.00th=[40633], 99.50th=[46924], 99.90th=[53216], 99.95th=[53216], 00:36:39.663 | 99.99th=[53740] 00:36:39.663 bw ( KiB/s): min= 2024, max= 2192, per=4.16%, avg=2142.32, stdev=56.98, samples=19 00:36:39.663 iops : min= 506, max= 548, avg=535.58, stdev=14.25, samples=19 00:36:39.663 lat (msec) : 20=0.50%, 50=99.20%, 100=0.30% 00:36:39.663 cpu : usr=99.05%, sys=0.55%, ctx=14, majf=0, minf=1636 00:36:39.663 IO depths : 1=4.3%, 2=9.5%, 4=21.5%, 8=55.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:36:39.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 complete : 0=0.0%, 4=93.3%, 8=1.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.663 filename2: (groupid=0, jobs=1): err= 0: pid=86222: Tue May 14 04:33:54 2024 00:36:39.663 read: IOPS=541, BW=2165KiB/s (2217kB/s)(21.1MiB/10001msec) 00:36:39.663 slat (usec): min=5, max=245, avg=41.90, stdev=36.51 00:36:39.663 clat (usec): min=5893, max=54315, avg=29197.34, stdev=4858.00 00:36:39.663 lat (usec): min=5901, max=54337, avg=29239.24, stdev=4859.01 00:36:39.663 clat percentiles (usec): 00:36:39.663 | 1.00th=[ 8848], 5.00th=[25035], 10.00th=[28181], 20.00th=[28443], 00:36:39.663 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:36:39.663 | 70.00th=[29754], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:36:39.663 | 99.00th=[49546], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:36:39.663 | 99.99th=[54264] 00:36:39.663 bw ( KiB/s): min= 2048, max= 2356, per=4.20%, avg=2165.26, stdev=84.16, samples=19 00:36:39.663 iops : min= 512, max= 589, avg=541.32, stdev=21.04, samples=19 00:36:39.663 lat (msec) : 10=1.46%, 20=2.46%, 50=95.18%, 100=0.91% 00:36:39.663 cpu : usr=99.07%, sys=0.53%, ctx=14, majf=0, minf=1639 00:36:39.663 IO depths : 1=4.7%, 2=10.1%, 4=22.6%, 8=54.6%, 16=8.1%, 32=0.0%, >=64=0.0% 00:36:39.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.663 issued rwts: total=5414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.663 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.663 filename2: (groupid=0, jobs=1): err= 0: pid=86223: Tue May 14 04:33:54 2024 00:36:39.663 read: IOPS=542, BW=2170KiB/s (2222kB/s)(21.2MiB/10008msec) 00:36:39.664 slat (usec): min=6, max=166, avg=19.09, 
stdev=26.66 00:36:39.664 clat (usec): min=4435, max=66036, avg=29345.48, stdev=5097.21 00:36:39.664 lat (usec): min=4443, max=66064, avg=29364.57, stdev=5096.17 00:36:39.664 clat percentiles (usec): 00:36:39.664 | 1.00th=[ 9372], 5.00th=[27132], 10.00th=[27919], 20.00th=[28443], 00:36:39.664 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:36:39.664 | 70.00th=[30016], 80.00th=[30540], 90.00th=[30540], 95.00th=[31065], 00:36:39.664 | 99.00th=[49546], 99.50th=[51643], 99.90th=[65799], 99.95th=[65799], 00:36:39.664 | 99.99th=[65799] 00:36:39.664 bw ( KiB/s): min= 2048, max= 2304, per=4.20%, avg=2165.60, stdev=80.05, samples=20 00:36:39.664 iops : min= 512, max= 576, avg=541.40, stdev=20.01, samples=20 00:36:39.664 lat (msec) : 10=1.40%, 20=2.14%, 50=95.65%, 100=0.81% 00:36:39.664 cpu : usr=98.87%, sys=0.70%, ctx=14, majf=0, minf=1634 00:36:39.664 IO depths : 1=5.0%, 2=10.9%, 4=23.7%, 8=52.9%, 16=7.6%, 32=0.0%, >=64=0.0% 00:36:39.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.664 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.664 issued rwts: total=5430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.664 filename2: (groupid=0, jobs=1): err= 0: pid=86224: Tue May 14 04:33:54 2024 00:36:39.664 read: IOPS=535, BW=2144KiB/s (2195kB/s)(21.0MiB/10013msec) 00:36:39.664 slat (usec): min=3, max=138, avg=35.95, stdev=28.54 00:36:39.664 clat (usec): min=6370, max=76534, avg=29587.83, stdev=3690.21 00:36:39.664 lat (usec): min=6379, max=76554, avg=29623.78, stdev=3689.03 00:36:39.664 clat percentiles (usec): 00:36:39.664 | 1.00th=[16581], 5.00th=[27395], 10.00th=[28181], 20.00th=[28705], 00:36:39.664 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:36:39.664 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[31327], 00:36:39.664 | 99.00th=[47449], 99.50th=[49546], 99.90th=[58983], 99.95th=[76022], 00:36:39.664 | 99.99th=[76022] 00:36:39.664 bw ( KiB/s): min= 2012, max= 2224, per=4.15%, avg=2141.00, stdev=53.65, samples=20 00:36:39.664 iops : min= 503, max= 556, avg=535.25, stdev=13.41, samples=20 00:36:39.664 lat (msec) : 10=0.15%, 20=1.36%, 50=98.16%, 100=0.34% 00:36:39.664 cpu : usr=99.09%, sys=0.50%, ctx=16, majf=0, minf=1634 00:36:39.664 IO depths : 1=1.4%, 2=5.7%, 4=19.0%, 8=61.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:36:39.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.664 complete : 0=0.0%, 4=92.8%, 8=2.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.664 issued rwts: total=5366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.664 filename2: (groupid=0, jobs=1): err= 0: pid=86225: Tue May 14 04:33:54 2024 00:36:39.664 read: IOPS=541, BW=2166KiB/s (2218kB/s)(21.2MiB/10018msec) 00:36:39.664 slat (usec): min=4, max=142, avg=19.51, stdev=17.47 00:36:39.664 clat (usec): min=7061, max=40130, avg=29395.21, stdev=1718.29 00:36:39.664 lat (usec): min=7070, max=40138, avg=29414.71, stdev=1719.60 00:36:39.664 clat percentiles (usec): 00:36:39.664 | 1.00th=[24249], 5.00th=[27919], 10.00th=[28181], 20.00th=[28705], 00:36:39.664 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:36:39.664 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:36:39.664 | 99.00th=[32375], 99.50th=[33817], 99.90th=[39060], 99.95th=[39060], 00:36:39.664 | 99.99th=[40109] 00:36:39.664 bw ( 
KiB/s): min= 2048, max= 2304, per=4.20%, avg=2162.95, stdev=67.96, samples=20 00:36:39.664 iops : min= 512, max= 576, avg=540.70, stdev=16.99, samples=20 00:36:39.664 lat (msec) : 10=0.04%, 20=0.68%, 50=99.28% 00:36:39.664 cpu : usr=99.14%, sys=0.45%, ctx=16, majf=0, minf=1634 00:36:39.664 IO depths : 1=5.2%, 2=11.3%, 4=24.6%, 8=51.5%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:39.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.664 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.664 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.664 filename2: (groupid=0, jobs=1): err= 0: pid=86226: Tue May 14 04:33:54 2024 00:36:39.664 read: IOPS=533, BW=2136KiB/s (2187kB/s)(20.9MiB/10006msec) 00:36:39.664 slat (usec): min=6, max=173, avg=28.23, stdev=31.22 00:36:39.664 clat (usec): min=8123, max=53093, avg=29757.18, stdev=5496.43 00:36:39.664 lat (usec): min=8139, max=53129, avg=29785.42, stdev=5494.00 00:36:39.664 clat percentiles (usec): 00:36:39.664 | 1.00th=[10290], 5.00th=[20579], 10.00th=[27132], 20.00th=[28181], 00:36:39.664 | 30.00th=[28705], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:36:39.664 | 70.00th=[30278], 80.00th=[30540], 90.00th=[33162], 95.00th=[39584], 00:36:39.664 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:36:39.664 | 99.99th=[53216] 00:36:39.664 bw ( KiB/s): min= 2016, max= 2320, per=4.14%, avg=2131.40, stdev=75.11, samples=20 00:36:39.664 iops : min= 504, max= 580, avg=532.85, stdev=18.78, samples=20 00:36:39.664 lat (msec) : 10=0.56%, 20=3.48%, 50=94.25%, 100=1.70% 00:36:39.664 cpu : usr=99.04%, sys=0.54%, ctx=14, majf=0, minf=1636 00:36:39.664 IO depths : 1=1.2%, 2=3.5%, 4=11.5%, 8=70.0%, 16=13.9%, 32=0.0%, >=64=0.0% 00:36:39.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.664 complete : 0=0.0%, 4=91.3%, 8=5.2%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.664 issued rwts: total=5342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.664 filename2: (groupid=0, jobs=1): err= 0: pid=86227: Tue May 14 04:33:54 2024 00:36:39.664 read: IOPS=536, BW=2147KiB/s (2199kB/s)(21.0MiB/10014msec) 00:36:39.664 slat (usec): min=6, max=150, avg=32.75, stdev=29.72 00:36:39.664 clat (usec): min=7696, max=60776, avg=29528.37, stdev=2232.69 00:36:39.664 lat (usec): min=7707, max=60802, avg=29561.12, stdev=2230.58 00:36:39.664 clat percentiles (usec): 00:36:39.664 | 1.00th=[24249], 5.00th=[27657], 10.00th=[28181], 20.00th=[28705], 00:36:39.664 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:36:39.664 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:36:39.664 | 99.00th=[33817], 99.50th=[34866], 99.90th=[60556], 99.95th=[60556], 00:36:39.664 | 99.99th=[60556] 00:36:39.664 bw ( KiB/s): min= 2048, max= 2192, per=4.16%, avg=2144.00, stdev=55.43, samples=20 00:36:39.664 iops : min= 512, max= 548, avg=536.00, stdev=13.86, samples=20 00:36:39.664 lat (msec) : 10=0.04%, 50=99.67%, 100=0.30% 00:36:39.664 cpu : usr=98.92%, sys=0.67%, ctx=15, majf=0, minf=1635 00:36:39.664 IO depths : 1=3.5%, 2=8.6%, 4=22.2%, 8=56.4%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:39.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.664 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.664 issued rwts: total=5376,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:39.664 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.664 00:36:39.664 Run status group 0 (all jobs): 00:36:39.664 READ: bw=50.3MiB/s (52.8MB/s), 2092KiB/s-2201KiB/s (2142kB/s-2254kB/s), io=504MiB (529MB), run=10001-10024msec 00:36:40.231 ----------------------------------------------------- 00:36:40.231 Suppressions used: 00:36:40.231 count bytes template 00:36:40.231 45 402 /usr/src/fio/parse.c 00:36:40.231 1 8 libtcmalloc_minimal.so 00:36:40.231 1 904 libcrypto.so 00:36:40.231 ----------------------------------------------------- 00:36:40.231 00:36:40.231 04:33:54 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:40.231 04:33:54 -- target/dif.sh@43 -- # local sub 00:36:40.231 04:33:54 -- target/dif.sh@45 -- # for sub in "$@" 00:36:40.231 04:33:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:40.231 04:33:54 -- target/dif.sh@36 -- # local sub_id=0 00:36:40.231 04:33:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:40.231 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.231 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.231 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.231 04:33:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:40.231 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.231 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.231 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.231 04:33:54 -- target/dif.sh@45 -- # for sub in "$@" 00:36:40.231 04:33:54 -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:40.231 04:33:54 -- target/dif.sh@36 -- # local sub_id=1 00:36:40.231 04:33:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:40.231 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.231 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.231 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.231 04:33:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:40.231 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.231 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.231 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.231 04:33:54 -- target/dif.sh@45 -- # for sub in "$@" 00:36:40.231 04:33:54 -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:40.231 04:33:54 -- target/dif.sh@36 -- # local sub_id=2 00:36:40.231 04:33:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:40.231 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.231 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.231 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.231 04:33:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:40.231 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.231 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.490 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.490 04:33:54 -- target/dif.sh@115 -- # NULL_DIF=1 00:36:40.490 04:33:54 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:40.490 04:33:54 -- target/dif.sh@115 -- # numjobs=2 00:36:40.490 04:33:54 -- target/dif.sh@115 -- # iodepth=8 00:36:40.490 04:33:54 -- target/dif.sh@115 -- # runtime=5 00:36:40.490 04:33:54 -- target/dif.sh@115 -- # files=1 00:36:40.490 04:33:54 -- target/dif.sh@117 -- # 
create_subsystems 0 1 00:36:40.490 04:33:54 -- target/dif.sh@28 -- # local sub 00:36:40.490 04:33:54 -- target/dif.sh@30 -- # for sub in "$@" 00:36:40.490 04:33:54 -- target/dif.sh@31 -- # create_subsystem 0 00:36:40.490 04:33:54 -- target/dif.sh@18 -- # local sub_id=0 00:36:40.490 04:33:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:40.490 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.490 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.490 bdev_null0 00:36:40.490 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.490 04:33:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:40.490 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.490 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.490 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.490 04:33:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:40.490 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.490 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.490 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.490 04:33:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:40.490 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.491 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.491 [2024-05-14 04:33:54.846757] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.491 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.491 04:33:54 -- target/dif.sh@30 -- # for sub in "$@" 00:36:40.491 04:33:54 -- target/dif.sh@31 -- # create_subsystem 1 00:36:40.491 04:33:54 -- target/dif.sh@18 -- # local sub_id=1 00:36:40.491 04:33:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:40.491 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.491 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.491 bdev_null1 00:36:40.491 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.491 04:33:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:40.491 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.491 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.491 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.491 04:33:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:40.491 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.491 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.491 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.491 04:33:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:40.491 04:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:40.491 04:33:54 -- common/autotest_common.sh@10 -- # set +x 00:36:40.491 04:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:40.491 04:33:54 -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:40.491 04:33:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:36:40.491 04:33:54 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:40.491 04:33:54 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.491 04:33:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:40.491 04:33:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:36:40.491 04:33:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:40.491 04:33:54 -- nvmf/common.sh@520 -- # config=() 00:36:40.491 04:33:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:36:40.491 04:33:54 -- nvmf/common.sh@520 -- # local subsystem config 00:36:40.491 04:33:54 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.491 04:33:54 -- target/dif.sh@82 -- # gen_fio_conf 00:36:40.491 04:33:54 -- common/autotest_common.sh@1320 -- # shift 00:36:40.491 04:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:40.491 04:33:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:36:40.491 04:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:40.491 { 00:36:40.491 "params": { 00:36:40.491 "name": "Nvme$subsystem", 00:36:40.491 "trtype": "$TEST_TRANSPORT", 00:36:40.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.491 "adrfam": "ipv4", 00:36:40.491 "trsvcid": "$NVMF_PORT", 00:36:40.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.491 "hdgst": ${hdgst:-false}, 00:36:40.491 "ddgst": ${ddgst:-false} 00:36:40.491 }, 00:36:40.491 "method": "bdev_nvme_attach_controller" 00:36:40.491 } 00:36:40.491 EOF 00:36:40.491 )") 00:36:40.491 04:33:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:40.491 04:33:54 -- target/dif.sh@54 -- # local file 00:36:40.491 04:33:54 -- target/dif.sh@56 -- # cat 00:36:40.491 04:33:54 -- nvmf/common.sh@542 -- # cat 00:36:40.491 04:33:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:36:40.491 04:33:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.491 04:33:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:40.491 04:33:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:40.491 04:33:54 -- target/dif.sh@72 -- # (( file <= files )) 00:36:40.491 04:33:54 -- target/dif.sh@73 -- # cat 00:36:40.491 04:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:40.491 04:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:40.491 { 00:36:40.491 "params": { 00:36:40.491 "name": "Nvme$subsystem", 00:36:40.491 "trtype": "$TEST_TRANSPORT", 00:36:40.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.491 "adrfam": "ipv4", 00:36:40.491 "trsvcid": "$NVMF_PORT", 00:36:40.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.491 "hdgst": ${hdgst:-false}, 00:36:40.491 "ddgst": ${ddgst:-false} 00:36:40.491 }, 00:36:40.491 "method": "bdev_nvme_attach_controller" 00:36:40.491 } 00:36:40.491 EOF 00:36:40.491 )") 00:36:40.491 04:33:54 -- target/dif.sh@72 -- # (( file++ )) 00:36:40.491 04:33:54 -- target/dif.sh@72 -- # (( file <= files )) 00:36:40.491 04:33:54 -- nvmf/common.sh@542 -- # cat 00:36:40.491 04:33:54 -- nvmf/common.sh@544 -- # jq . 
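
For reference, the create_subsystems trace above drives the target through the same RPCs that can be issued by hand with SPDK's scripts/rpc.py. A minimal sketch for subsystem 0, assuming a target is already running on the default RPC socket and that the SPDK repository root is the working directory (the arguments are the ones shown in the trace; subsystem 1 is set up identically apart from bdev_null1, cnode1 and the -1 serial number):

    # create the TCP transport first if the target does not have one yet
    scripts/rpc.py nvmf_create_transport -t tcp
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
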
00:36:40.491 04:33:54 -- nvmf/common.sh@545 -- # IFS=, 00:36:40.491 04:33:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:40.491 "params": { 00:36:40.491 "name": "Nvme0", 00:36:40.491 "trtype": "tcp", 00:36:40.491 "traddr": "10.0.0.2", 00:36:40.491 "adrfam": "ipv4", 00:36:40.491 "trsvcid": "4420", 00:36:40.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.491 "hdgst": false, 00:36:40.491 "ddgst": false 00:36:40.491 }, 00:36:40.491 "method": "bdev_nvme_attach_controller" 00:36:40.491 },{ 00:36:40.491 "params": { 00:36:40.491 "name": "Nvme1", 00:36:40.491 "trtype": "tcp", 00:36:40.491 "traddr": "10.0.0.2", 00:36:40.491 "adrfam": "ipv4", 00:36:40.491 "trsvcid": "4420", 00:36:40.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:40.491 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:40.491 "hdgst": false, 00:36:40.491 "ddgst": false 00:36:40.491 }, 00:36:40.491 "method": "bdev_nvme_attach_controller" 00:36:40.491 }' 00:36:40.491 04:33:54 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:40.491 04:33:54 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:40.491 04:33:54 -- common/autotest_common.sh@1326 -- # break 00:36:40.491 04:33:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:40.491 04:33:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.749 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:40.749 ... 00:36:40.749 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:40.749 ... 00:36:40.749 fio-3.35 00:36:40.749 Starting 4 threads 00:36:41.008 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.941 [2024-05-14 04:33:56.208873] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
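
The fio processes started above go through the SPDK bdev fio plugin: the JSON written to /dev/fd/62 attaches the NVMe/TCP controllers as bdevs, and /dev/fd/61 carries the generated job file. A rough standalone equivalent, with both descriptors replaced by ordinary files, is sketched below. The outer "subsystems"/"config" wrapper is the usual SPDK JSON config layout and is an assumption about what gen_nvmf_target_json adds around the fragment printed above; the bdev name Nvme0n1 (controller name plus namespace 1) and the file paths are likewise illustrative. Only the first controller is shown; the traced run attached Nvme1 for cnode1 in the same way.

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # job parameters as reported by fio above: randread, bs=8k,16k,128k,
    # iodepth=8, numjobs=2, runtime=5
    cat > /tmp/dif.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=/tmp/bdev.json
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1
    EOF

    LD_PRELOAD=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio /tmp/dif.fio
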
00:36:41.941 [2024-05-14 04:33:56.208942] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:36:47.201 00:36:47.201 filename0: (groupid=0, jobs=1): err= 0: pid=88772: Tue May 14 04:34:01 2024 00:36:47.201 read: IOPS=2819, BW=22.0MiB/s (23.1MB/s)(110MiB/5002msec) 00:36:47.201 slat (nsec): min=4831, max=35688, avg=7010.03, stdev=2165.96 00:36:47.201 clat (usec): min=499, max=6182, avg=2816.81, stdev=598.70 00:36:47.201 lat (usec): min=506, max=6190, avg=2823.82, stdev=598.80 00:36:47.201 clat percentiles (usec): 00:36:47.201 | 1.00th=[ 1598], 5.00th=[ 1942], 10.00th=[ 2114], 20.00th=[ 2311], 00:36:47.201 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2900], 00:36:47.201 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3785], 00:36:47.201 | 99.00th=[ 4621], 99.50th=[ 5080], 99.90th=[ 5735], 99.95th=[ 5800], 00:36:47.201 | 99.99th=[ 6194] 00:36:47.201 bw ( KiB/s): min=20176, max=25808, per=27.49%, avg=22563.90, stdev=1674.92, samples=10 00:36:47.201 iops : min= 2522, max= 3226, avg=2820.40, stdev=209.35, samples=10 00:36:47.201 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.13% 00:36:47.201 lat (msec) : 2=5.74%, 4=90.92%, 10=3.19% 00:36:47.201 cpu : usr=97.46%, sys=2.22%, ctx=8, majf=0, minf=1637 00:36:47.201 IO depths : 1=0.1%, 2=8.4%, 4=62.4%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.201 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.201 issued rwts: total=14104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:47.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:47.201 filename0: (groupid=0, jobs=1): err= 0: pid=88773: Tue May 14 04:34:01 2024 00:36:47.201 read: IOPS=2483, BW=19.4MiB/s (20.3MB/s)(97.0MiB/5001msec) 00:36:47.201 slat (usec): min=3, max=513, avg= 7.30, stdev= 5.23 00:36:47.201 clat (usec): min=618, max=6657, avg=3201.45, stdev=614.85 00:36:47.201 lat (usec): min=624, max=6664, avg=3208.75, stdev=614.93 00:36:47.201 clat percentiles (usec): 00:36:47.201 | 1.00th=[ 1975], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2737], 00:36:47.201 | 30.00th=[ 2868], 40.00th=[ 2999], 50.00th=[ 3130], 60.00th=[ 3261], 00:36:47.201 | 70.00th=[ 3425], 80.00th=[ 3621], 90.00th=[ 3982], 95.00th=[ 4359], 00:36:47.201 | 99.00th=[ 5014], 99.50th=[ 5342], 99.90th=[ 5932], 99.95th=[ 5997], 00:36:47.201 | 99.99th=[ 6652] 00:36:47.201 bw ( KiB/s): min=19216, max=20736, per=24.20%, avg=19865.90, stdev=469.02, samples=10 00:36:47.201 iops : min= 2402, max= 2592, avg=2483.00, stdev=58.84, samples=10 00:36:47.201 lat (usec) : 750=0.01%, 1000=0.04% 00:36:47.201 lat (msec) : 2=1.10%, 4=89.24%, 10=9.62% 00:36:47.201 cpu : usr=97.54%, sys=2.14%, ctx=7, majf=0, minf=1636 00:36:47.201 IO depths : 1=0.1%, 2=5.8%, 4=64.2%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.201 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.201 issued rwts: total=12418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:47.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:47.201 filename1: (groupid=0, jobs=1): err= 0: pid=88774: Tue May 14 04:34:01 2024 00:36:47.201 read: IOPS=2566, BW=20.1MiB/s (21.0MB/s)(100MiB/5002msec) 00:36:47.201 slat (nsec): min=3864, max=41983, avg=7210.06, stdev=2751.48 00:36:47.201 clat (usec): min=596, max=6431, avg=3095.94, stdev=639.35 00:36:47.201 lat (usec): min=602, max=6438, 
avg=3103.15, stdev=639.34 00:36:47.201 clat percentiles (usec): 00:36:47.201 | 1.00th=[ 1893], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2606], 00:36:47.201 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3130], 00:36:47.201 | 70.00th=[ 3326], 80.00th=[ 3523], 90.00th=[ 3916], 95.00th=[ 4293], 00:36:47.201 | 99.00th=[ 5145], 99.50th=[ 5473], 99.90th=[ 6063], 99.95th=[ 6128], 00:36:47.201 | 99.99th=[ 6325] 00:36:47.201 bw ( KiB/s): min=18212, max=23392, per=25.02%, avg=20535.20, stdev=1553.41, samples=10 00:36:47.201 iops : min= 2276, max= 2924, avg=2566.80, stdev=194.34, samples=10 00:36:47.201 lat (usec) : 750=0.02%, 1000=0.02% 00:36:47.201 lat (msec) : 2=1.47%, 4=89.85%, 10=8.64% 00:36:47.201 cpu : usr=96.56%, sys=3.10%, ctx=7, majf=0, minf=1634 00:36:47.202 IO depths : 1=0.1%, 2=5.5%, 4=65.9%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:47.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.202 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.202 issued rwts: total=12838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:47.202 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:47.202 filename1: (groupid=0, jobs=1): err= 0: pid=88775: Tue May 14 04:34:01 2024 00:36:47.202 read: IOPS=2391, BW=18.7MiB/s (19.6MB/s)(93.4MiB/5001msec) 00:36:47.202 slat (nsec): min=3561, max=38456, avg=7183.71, stdev=2609.71 00:36:47.202 clat (usec): min=634, max=13349, avg=3326.01, stdev=729.60 00:36:47.202 lat (usec): min=641, max=13367, avg=3333.20, stdev=729.67 00:36:47.202 clat percentiles (usec): 00:36:47.202 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2769], 00:36:47.202 | 30.00th=[ 2933], 40.00th=[ 3032], 50.00th=[ 3163], 60.00th=[ 3359], 00:36:47.202 | 70.00th=[ 3556], 80.00th=[ 3785], 90.00th=[ 4293], 95.00th=[ 4686], 00:36:47.202 | 99.00th=[ 5407], 99.50th=[ 5735], 99.90th=[ 6521], 99.95th=[13173], 00:36:47.202 | 99.99th=[13304] 00:36:47.202 bw ( KiB/s): min=17408, max=20512, per=23.31%, avg=19134.10, stdev=1018.62, samples=10 00:36:47.202 iops : min= 2176, max= 2564, avg=2391.60, stdev=127.44, samples=10 00:36:47.202 lat (usec) : 750=0.02%, 1000=0.02% 00:36:47.202 lat (msec) : 2=0.45%, 4=83.95%, 10=15.50%, 20=0.07% 00:36:47.202 cpu : usr=97.08%, sys=2.58%, ctx=7, majf=0, minf=1637 00:36:47.202 IO depths : 1=0.1%, 2=2.8%, 4=67.7%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:47.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.202 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.202 issued rwts: total=11960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:47.202 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:47.202 00:36:47.202 Run status group 0 (all jobs): 00:36:47.202 READ: bw=80.2MiB/s (84.0MB/s), 18.7MiB/s-22.0MiB/s (19.6MB/s-23.1MB/s), io=401MiB (420MB), run=5001-5002msec 00:36:47.460 ----------------------------------------------------- 00:36:47.460 Suppressions used: 00:36:47.460 count bytes template 00:36:47.460 6 52 /usr/src/fio/parse.c 00:36:47.460 1 8 libtcmalloc_minimal.so 00:36:47.460 1 904 libcrypto.so 00:36:47.460 ----------------------------------------------------- 00:36:47.460 00:36:47.460 04:34:01 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:47.460 04:34:01 -- target/dif.sh@43 -- # local sub 00:36:47.460 04:34:01 -- target/dif.sh@45 -- # for sub in "$@" 00:36:47.460 04:34:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:47.460 04:34:01 -- target/dif.sh@36 -- # local sub_id=0 00:36:47.460 
04:34:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:47.460 04:34:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:47.460 04:34:01 -- common/autotest_common.sh@10 -- # set +x 00:36:47.460 04:34:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:47.460 04:34:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:47.460 04:34:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:47.460 04:34:01 -- common/autotest_common.sh@10 -- # set +x 00:36:47.460 04:34:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:47.460 04:34:01 -- target/dif.sh@45 -- # for sub in "$@" 00:36:47.460 04:34:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:47.460 04:34:01 -- target/dif.sh@36 -- # local sub_id=1 00:36:47.460 04:34:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:47.460 04:34:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:47.460 04:34:01 -- common/autotest_common.sh@10 -- # set +x 00:36:47.460 04:34:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:47.460 04:34:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:47.460 04:34:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:47.460 04:34:01 -- common/autotest_common.sh@10 -- # set +x 00:36:47.460 04:34:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:47.460 00:36:47.460 real 0m26.066s 00:36:47.460 user 5m25.588s 00:36:47.460 sys 0m3.822s 00:36:47.460 04:34:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:47.460 04:34:01 -- common/autotest_common.sh@10 -- # set +x 00:36:47.460 ************************************ 00:36:47.460 END TEST fio_dif_rand_params 00:36:47.460 ************************************ 00:36:47.460 04:34:02 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:47.460 04:34:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:47.460 04:34:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:47.460 04:34:02 -- common/autotest_common.sh@10 -- # set +x 00:36:47.460 ************************************ 00:36:47.460 START TEST fio_dif_digest 00:36:47.460 ************************************ 00:36:47.460 04:34:02 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:36:47.460 04:34:02 -- target/dif.sh@123 -- # local NULL_DIF 00:36:47.460 04:34:02 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:47.460 04:34:02 -- target/dif.sh@125 -- # local hdgst ddgst 00:36:47.460 04:34:02 -- target/dif.sh@127 -- # NULL_DIF=3 00:36:47.460 04:34:02 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:47.460 04:34:02 -- target/dif.sh@127 -- # numjobs=3 00:36:47.460 04:34:02 -- target/dif.sh@127 -- # iodepth=3 00:36:47.460 04:34:02 -- target/dif.sh@127 -- # runtime=10 00:36:47.460 04:34:02 -- target/dif.sh@128 -- # hdgst=true 00:36:47.460 04:34:02 -- target/dif.sh@128 -- # ddgst=true 00:36:47.460 04:34:02 -- target/dif.sh@130 -- # create_subsystems 0 00:36:47.460 04:34:02 -- target/dif.sh@28 -- # local sub 00:36:47.460 04:34:02 -- target/dif.sh@30 -- # for sub in "$@" 00:36:47.460 04:34:02 -- target/dif.sh@31 -- # create_subsystem 0 00:36:47.460 04:34:02 -- target/dif.sh@18 -- # local sub_id=0 00:36:47.460 04:34:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:47.461 04:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:47.461 04:34:02 -- common/autotest_common.sh@10 -- # set +x 00:36:47.461 bdev_null0 00:36:47.461 
04:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:47.461 04:34:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:47.461 04:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:47.461 04:34:02 -- common/autotest_common.sh@10 -- # set +x 00:36:47.461 04:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:47.461 04:34:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:47.461 04:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:47.461 04:34:02 -- common/autotest_common.sh@10 -- # set +x 00:36:47.461 04:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:47.461 04:34:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:47.461 04:34:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:36:47.461 04:34:02 -- common/autotest_common.sh@10 -- # set +x 00:36:47.461 [2024-05-14 04:34:02.045642] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:47.721 04:34:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:36:47.721 04:34:02 -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:47.721 04:34:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.721 04:34:02 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.721 04:34:02 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:36:47.721 04:34:02 -- target/dif.sh@82 -- # gen_fio_conf 00:36:47.721 04:34:02 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:47.721 04:34:02 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:47.721 04:34:02 -- target/dif.sh@54 -- # local file 00:36:47.721 04:34:02 -- common/autotest_common.sh@1318 -- # local sanitizers 00:36:47.721 04:34:02 -- target/dif.sh@56 -- # cat 00:36:47.721 04:34:02 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.721 04:34:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:47.721 04:34:02 -- common/autotest_common.sh@1320 -- # shift 00:36:47.721 04:34:02 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:36:47.721 04:34:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:36:47.721 04:34:02 -- nvmf/common.sh@520 -- # config=() 00:36:47.721 04:34:02 -- nvmf/common.sh@520 -- # local subsystem config 00:36:47.721 04:34:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:36:47.721 04:34:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:36:47.721 { 00:36:47.721 "params": { 00:36:47.721 "name": "Nvme$subsystem", 00:36:47.721 "trtype": "$TEST_TRANSPORT", 00:36:47.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:47.721 "adrfam": "ipv4", 00:36:47.721 "trsvcid": "$NVMF_PORT", 00:36:47.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:47.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:47.721 "hdgst": ${hdgst:-false}, 00:36:47.721 "ddgst": ${ddgst:-false} 00:36:47.721 }, 00:36:47.721 "method": "bdev_nvme_attach_controller" 00:36:47.721 } 00:36:47.721 EOF 00:36:47.721 )") 00:36:47.721 04:34:02 -- nvmf/common.sh@542 -- # cat 00:36:47.721 04:34:02 -- common/autotest_common.sh@1324 -- # grep libasan 00:36:47.721 
04:34:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:36:47.721 04:34:02 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.721 04:34:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:47.721 04:34:02 -- target/dif.sh@72 -- # (( file <= files )) 00:36:47.721 04:34:02 -- nvmf/common.sh@544 -- # jq . 00:36:47.721 04:34:02 -- nvmf/common.sh@545 -- # IFS=, 00:36:47.721 04:34:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:36:47.721 "params": { 00:36:47.721 "name": "Nvme0", 00:36:47.721 "trtype": "tcp", 00:36:47.721 "traddr": "10.0.0.2", 00:36:47.721 "adrfam": "ipv4", 00:36:47.721 "trsvcid": "4420", 00:36:47.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:47.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:47.721 "hdgst": true, 00:36:47.721 "ddgst": true 00:36:47.721 }, 00:36:47.721 "method": "bdev_nvme_attach_controller" 00:36:47.721 }' 00:36:47.721 04:34:02 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:47.721 04:34:02 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:47.721 04:34:02 -- common/autotest_common.sh@1326 -- # break 00:36:47.721 04:34:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:47.721 04:34:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.983 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:47.983 ... 00:36:47.983 fio-3.35 00:36:47.983 Starting 3 threads 00:36:47.983 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.550 [2024-05-14 04:34:03.058633] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
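
The fio_dif_digest run above re-creates subsystem 0 with a DIF type 3 null bdev and, per the printf'd parameters, attaches the controller with "hdgst": true and "ddgst": true so that NVMe/TCP header and data digests are exercised. Starting from the illustrative /tmp/bdev.json of the previous sketch, a digest-enabled variant could be derived as below (the jq path and file names are assumptions; the fio job for this run uses bs=128k, numjobs=3, iodepth=3, runtime=10 as traced):

    # flip the attach parameters to enable TCP header/data digests
    jq '.subsystems[0].config[0].params += {"hdgst": true, "ddgst": true}' \
        /tmp/bdev.json > /tmp/bdev_digest.json
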
00:36:48.550 [2024-05-14 04:34:03.058699] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:37:00.733 00:37:00.733 filename0: (groupid=0, jobs=1): err= 0: pid=90430: Tue May 14 04:34:13 2024 00:37:00.733 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(365MiB/10046msec) 00:37:00.733 slat (nsec): min=4818, max=37098, avg=10758.17, stdev=2608.18 00:37:00.733 clat (usec): min=6085, max=51760, avg=10297.43, stdev=1326.96 00:37:00.733 lat (usec): min=6091, max=51774, avg=10308.19, stdev=1326.90 00:37:00.733 clat percentiles (usec): 00:37:00.733 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:37:00.733 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:37:00.733 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11338], 95.00th=[11731], 00:37:00.733 | 99.00th=[12911], 99.50th=[13304], 99.90th=[15926], 99.95th=[49021], 00:37:00.733 | 99.99th=[51643] 00:37:00.733 bw ( KiB/s): min=35072, max=38912, per=32.85%, avg=37337.60, stdev=990.70, samples=20 00:37:00.733 iops : min= 274, max= 304, avg=291.70, stdev= 7.74, samples=20 00:37:00.733 lat (msec) : 10=37.03%, 20=62.90%, 50=0.03%, 100=0.03% 00:37:00.733 cpu : usr=96.70%, sys=3.00%, ctx=19, majf=0, minf=1637 00:37:00.733 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.733 issued rwts: total=2919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.733 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:00.733 filename0: (groupid=0, jobs=1): err= 0: pid=90431: Tue May 14 04:34:13 2024 00:37:00.733 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(374MiB/10044msec) 00:37:00.733 slat (nsec): min=6291, max=37795, avg=12046.07, stdev=3729.39 00:37:00.733 clat (usec): min=7620, max=50643, avg=10055.94, stdev=1226.97 00:37:00.733 lat (usec): min=7635, max=50653, avg=10067.98, stdev=1226.80 00:37:00.733 clat percentiles (usec): 00:37:00.733 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9110], 20.00th=[ 9503], 00:37:00.733 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:37:00.733 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:37:00.733 | 99.00th=[12256], 99.50th=[12780], 99.90th=[18744], 99.95th=[43779], 00:37:00.733 | 99.99th=[50594] 00:37:00.733 bw ( KiB/s): min=36608, max=39168, per=33.63%, avg=38220.80, stdev=659.77, samples=20 00:37:00.733 iops : min= 286, max= 306, avg=298.60, stdev= 5.15, samples=20 00:37:00.733 lat (msec) : 10=50.20%, 20=49.73%, 50=0.03%, 100=0.03% 00:37:00.733 cpu : usr=97.56%, sys=2.16%, ctx=14, majf=0, minf=1637 00:37:00.733 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.733 issued rwts: total=2988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.733 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:00.733 filename0: (groupid=0, jobs=1): err= 0: pid=90432: Tue May 14 04:34:13 2024 00:37:00.733 read: IOPS=299, BW=37.5MiB/s (39.3MB/s)(377MiB/10044msec) 00:37:00.733 slat (nsec): min=4974, max=39561, avg=9186.23, stdev=2525.61 00:37:00.733 clat (usec): min=7891, max=50328, avg=9976.39, stdev=1270.44 00:37:00.734 lat (usec): min=7898, max=50337, avg=9985.58, stdev=1270.55 00:37:00.734 clat percentiles 
(usec): 00:37:00.734 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:37:00.734 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:37:00.734 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11338], 00:37:00.734 | 99.00th=[12125], 99.50th=[12649], 99.90th=[19530], 99.95th=[46924], 00:37:00.734 | 99.99th=[50070] 00:37:00.734 bw ( KiB/s): min=36937, max=39680, per=33.91%, avg=38544.45, stdev=816.53, samples=20 00:37:00.734 iops : min= 288, max= 310, avg=301.10, stdev= 6.44, samples=20 00:37:00.734 lat (msec) : 10=56.32%, 20=43.61%, 50=0.03%, 100=0.03% 00:37:00.734 cpu : usr=97.51%, sys=2.22%, ctx=14, majf=0, minf=1635 00:37:00.734 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:00.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.734 issued rwts: total=3013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:00.734 00:37:00.734 Run status group 0 (all jobs): 00:37:00.734 READ: bw=111MiB/s (116MB/s), 36.3MiB/s-37.5MiB/s (38.1MB/s-39.3MB/s), io=1115MiB (1169MB), run=10044-10046msec 00:37:00.734 ----------------------------------------------------- 00:37:00.734 Suppressions used: 00:37:00.734 count bytes template 00:37:00.734 5 44 /usr/src/fio/parse.c 00:37:00.734 1 8 libtcmalloc_minimal.so 00:37:00.734 1 904 libcrypto.so 00:37:00.734 ----------------------------------------------------- 00:37:00.734 00:37:00.734 04:34:13 -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:00.734 04:34:13 -- target/dif.sh@43 -- # local sub 00:37:00.734 04:34:13 -- target/dif.sh@45 -- # for sub in "$@" 00:37:00.734 04:34:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:00.734 04:34:13 -- target/dif.sh@36 -- # local sub_id=0 00:37:00.734 04:34:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:00.734 04:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.734 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:37:00.734 04:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.734 04:34:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:00.734 04:34:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:00.734 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:37:00.734 04:34:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:00.734 00:37:00.734 real 0m11.929s 00:37:00.734 user 0m41.187s 00:37:00.734 sys 0m1.175s 00:37:00.734 04:34:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:00.734 04:34:13 -- common/autotest_common.sh@10 -- # set +x 00:37:00.734 ************************************ 00:37:00.734 END TEST fio_dif_digest 00:37:00.734 ************************************ 00:37:00.734 04:34:13 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:00.734 04:34:13 -- target/dif.sh@147 -- # nvmftestfini 00:37:00.734 04:34:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:37:00.734 04:34:13 -- nvmf/common.sh@116 -- # sync 00:37:00.734 04:34:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:00.734 04:34:13 -- nvmf/common.sh@119 -- # set +e 00:37:00.734 04:34:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:00.734 04:34:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:00.734 rmmod nvme_tcp 00:37:00.734 rmmod nvme_fabrics 00:37:00.734 rmmod nvme_keyring 00:37:00.734 04:34:14 -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-fabrics 00:37:00.734 04:34:14 -- nvmf/common.sh@123 -- # set -e 00:37:00.734 04:34:14 -- nvmf/common.sh@124 -- # return 0 00:37:00.734 04:34:14 -- nvmf/common.sh@477 -- # '[' -n 78961 ']' 00:37:00.734 04:34:14 -- nvmf/common.sh@478 -- # killprocess 78961 00:37:00.734 04:34:14 -- common/autotest_common.sh@926 -- # '[' -z 78961 ']' 00:37:00.734 04:34:14 -- common/autotest_common.sh@930 -- # kill -0 78961 00:37:00.734 04:34:14 -- common/autotest_common.sh@931 -- # uname 00:37:00.734 04:34:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:00.734 04:34:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78961 00:37:00.734 04:34:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:00.734 04:34:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:00.734 04:34:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78961' 00:37:00.734 killing process with pid 78961 00:37:00.734 04:34:14 -- common/autotest_common.sh@945 -- # kill 78961 00:37:00.734 04:34:14 -- common/autotest_common.sh@950 -- # wait 78961 00:37:00.734 04:34:14 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:37:00.734 04:34:14 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:37:02.638 Waiting for block devices as requested 00:37:02.638 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:37:02.638 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:02.897 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:02.897 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:02.897 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:37:02.897 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:03.155 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:37:03.155 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:03.155 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:37:03.155 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:03.414 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:37:03.414 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:37:03.414 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:37:03.414 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:37:03.673 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:03.673 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:37:03.673 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:03.673 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:37:03.932 04:34:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:03.932 04:34:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:03.932 04:34:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:03.932 04:34:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:03.932 04:34:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.932 04:34:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:03.932 04:34:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:05.839 04:34:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:37:05.839 00:37:05.839 real 1m17.731s 00:37:05.839 user 8m11.761s 00:37:05.839 sys 0m16.556s 00:37:05.839 04:34:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:05.839 04:34:20 -- common/autotest_common.sh@10 -- # set +x 00:37:05.839 ************************************ 00:37:05.839 END TEST nvmf_dif 00:37:05.839 ************************************ 00:37:05.839 04:34:20 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:05.839 04:34:20 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:05.839 04:34:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:05.839 04:34:20 -- common/autotest_common.sh@10 -- # set +x 00:37:05.839 ************************************ 00:37:05.839 START TEST nvmf_abort_qd_sizes 00:37:05.839 ************************************ 00:37:05.839 04:34:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:06.098 * Looking for test storage... 00:37:06.098 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:37:06.098 04:34:20 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:37:06.098 04:34:20 -- nvmf/common.sh@7 -- # uname -s 00:37:06.098 04:34:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:06.098 04:34:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:06.098 04:34:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:06.098 04:34:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:06.098 04:34:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:06.098 04:34:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:06.098 04:34:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:06.098 04:34:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:06.098 04:34:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:06.098 04:34:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:06.098 04:34:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:37:06.098 04:34:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:37:06.098 04:34:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:06.098 04:34:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:06.098 04:34:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:37:06.098 04:34:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:37:06.098 04:34:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:06.098 04:34:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:06.098 04:34:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:06.098 04:34:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.098 04:34:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.098 04:34:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.098 04:34:20 -- paths/export.sh@5 -- # export PATH 00:37:06.098 04:34:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.098 04:34:20 -- nvmf/common.sh@46 -- # : 0 00:37:06.098 04:34:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:37:06.098 04:34:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:37:06.098 04:34:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:37:06.098 04:34:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:06.098 04:34:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:06.098 04:34:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:37:06.098 04:34:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:37:06.098 04:34:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:37:06.098 04:34:20 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:37:06.098 04:34:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:37:06.098 04:34:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:06.098 04:34:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:37:06.098 04:34:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:37:06.098 04:34:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:37:06.098 04:34:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.098 04:34:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:06.098 04:34:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:06.098 04:34:20 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:37:06.098 04:34:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:37:06.098 04:34:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:37:06.098 04:34:20 -- common/autotest_common.sh@10 -- # set +x 00:37:11.367 04:34:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:37:11.367 04:34:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:37:11.367 04:34:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:37:11.367 04:34:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:37:11.367 04:34:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:37:11.367 04:34:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:37:11.367 04:34:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:37:11.367 04:34:25 -- nvmf/common.sh@294 -- # net_devs=() 00:37:11.367 04:34:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:37:11.367 04:34:25 -- nvmf/common.sh@295 -- # e810=() 00:37:11.367 04:34:25 -- nvmf/common.sh@295 -- # local -ga e810 00:37:11.367 04:34:25 -- nvmf/common.sh@296 -- # x722=() 00:37:11.367 04:34:25 -- nvmf/common.sh@296 -- # local -ga x722 00:37:11.367 04:34:25 -- nvmf/common.sh@297 -- # mlx=() 00:37:11.367 04:34:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:37:11.367 04:34:25 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:11.367 04:34:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:37:11.367 04:34:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:37:11.367 04:34:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:37:11.367 04:34:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:37:11.367 Found 0000:27:00.0 (0x8086 - 0x159b) 00:37:11.367 04:34:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:37:11.367 04:34:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:37:11.367 Found 0000:27:00.1 (0x8086 - 0x159b) 00:37:11.367 04:34:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:37:11.367 04:34:25 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:37:11.367 04:34:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.367 04:34:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:37:11.367 04:34:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.367 04:34:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:37:11.367 Found net devices under 0000:27:00.0: cvl_0_0 00:37:11.367 04:34:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.367 04:34:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:37:11.367 04:34:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.367 04:34:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:37:11.367 04:34:25 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.367 04:34:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:37:11.367 Found net devices under 0000:27:00.1: cvl_0_1 00:37:11.367 04:34:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.367 04:34:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:37:11.367 04:34:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:37:11.367 04:34:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:37:11.367 04:34:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:37:11.367 04:34:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:11.367 04:34:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:11.367 04:34:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:11.367 04:34:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:37:11.367 04:34:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:11.367 04:34:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:11.367 04:34:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:37:11.367 04:34:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:11.367 04:34:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:11.367 04:34:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:37:11.367 04:34:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:37:11.367 04:34:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:37:11.367 04:34:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:11.367 04:34:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:11.367 04:34:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:11.367 04:34:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:37:11.367 04:34:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:11.367 04:34:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:11.367 04:34:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:11.367 04:34:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:37:11.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:11.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:37:11.367 00:37:11.367 --- 10.0.0.2 ping statistics --- 00:37:11.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.367 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:37:11.367 04:34:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:11.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
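Condensed, the nvmf_tcp_init steps traced above split the two NIC ports between the default namespace (initiator, 10.0.0.1) and a dedicated namespace (target, 10.0.0.2), then check reachability in both directions; interface names are the ones from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1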
00:37:11.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:37:11.367 00:37:11.367 --- 10.0.0.1 ping statistics --- 00:37:11.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.367 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:37:11.367 04:34:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:11.367 04:34:25 -- nvmf/common.sh@410 -- # return 0 00:37:11.367 04:34:25 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:37:11.367 04:34:25 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:37:14.655 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:14.655 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:14.655 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:14.655 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:37:14.655 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:14.655 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:37:14.655 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:14.655 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:37:14.655 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:14.655 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:37:14.655 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:37:14.655 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:37:14.655 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:14.655 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:37:14.655 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:37:14.655 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:37:16.037 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:37:16.604 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:37:16.604 04:34:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:16.604 04:34:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:37:16.604 04:34:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:37:16.604 04:34:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:16.604 04:34:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:37:16.604 04:34:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:37:16.604 04:34:31 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:37:16.604 04:34:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:37:16.604 04:34:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:16.604 04:34:31 -- common/autotest_common.sh@10 -- # set +x 00:37:16.604 04:34:31 -- nvmf/common.sh@469 -- # nvmfpid=99789 00:37:16.604 04:34:31 -- nvmf/common.sh@470 -- # waitforlisten 99789 00:37:16.604 04:34:31 -- common/autotest_common.sh@819 -- # '[' -z 99789 ']' 00:37:16.605 04:34:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.605 04:34:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:16.605 04:34:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.605 04:34:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:16.605 04:34:31 -- common/autotest_common.sh@10 -- # set +x 00:37:16.605 04:34:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:16.605 [2024-05-14 04:34:31.147737] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
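nvmfappstart then launches the target application inside that namespace and blocks until its RPC socket answers. A minimal approximation of the launch-and-wait pattern; the real waitforlisten helper in autotest_common.sh does more bookkeeping, and the rpc_get_methods poll here is an assumption:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done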
00:37:16.605 [2024-05-14 04:34:31.147845] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:16.863 EAL: No free 2048 kB hugepages reported on node 1 00:37:16.863 [2024-05-14 04:34:31.268953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:16.863 [2024-05-14 04:34:31.368347] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:37:16.864 [2024-05-14 04:34:31.368509] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:16.864 [2024-05-14 04:34:31.368521] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:16.864 [2024-05-14 04:34:31.368530] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:16.864 [2024-05-14 04:34:31.368597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:16.864 [2024-05-14 04:34:31.368621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:16.864 [2024-05-14 04:34:31.368731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.864 [2024-05-14 04:34:31.368741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:17.430 04:34:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:17.430 04:34:31 -- common/autotest_common.sh@852 -- # return 0 00:37:17.430 04:34:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:37:17.430 04:34:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:37:17.430 04:34:31 -- common/autotest_common.sh@10 -- # set +x 00:37:17.430 04:34:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:17.430 04:34:31 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:17.430 04:34:31 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:37:17.430 04:34:31 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:37:17.430 04:34:31 -- scripts/common.sh@311 -- # local bdf bdfs 00:37:17.430 04:34:31 -- scripts/common.sh@312 -- # local nvmes 00:37:17.430 04:34:31 -- scripts/common.sh@314 -- # [[ -n 0000:c9:00.0 0000:ca:00.0 ]] 00:37:17.430 04:34:31 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:17.430 04:34:31 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:37:17.430 04:34:31 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:c9:00.0 ]] 00:37:17.430 04:34:31 -- scripts/common.sh@322 -- # uname -s 00:37:17.430 04:34:31 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:37:17.430 04:34:31 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:37:17.430 04:34:31 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:37:17.430 04:34:31 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:ca:00.0 ]] 00:37:17.430 04:34:31 -- scripts/common.sh@322 -- # uname -s 00:37:17.430 04:34:31 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:37:17.430 04:34:31 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:37:17.430 04:34:31 -- scripts/common.sh@327 -- # (( 2 )) 00:37:17.430 04:34:31 -- scripts/common.sh@328 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:37:17.430 04:34:31 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:37:17.430 04:34:31 -- target/abort_qd_sizes.sh@81 -- # 
nvme=0000:c9:00.0 00:37:17.430 04:34:31 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:37:17.430 04:34:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:17.430 04:34:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:17.430 04:34:31 -- common/autotest_common.sh@10 -- # set +x 00:37:17.430 ************************************ 00:37:17.430 START TEST spdk_target_abort 00:37:17.430 ************************************ 00:37:17.430 04:34:31 -- common/autotest_common.sh@1104 -- # spdk_target 00:37:17.430 04:34:31 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:17.430 04:34:31 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:37:17.430 04:34:31 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:c9:00.0 -b spdk_target 00:37:17.430 04:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:17.430 04:34:31 -- common/autotest_common.sh@10 -- # set +x 00:37:20.750 spdk_targetn1 00:37:20.750 04:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:20.750 04:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:20.750 04:34:34 -- common/autotest_common.sh@10 -- # set +x 00:37:20.750 [2024-05-14 04:34:34.736510] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:20.750 04:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:37:20.750 04:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:20.750 04:34:34 -- common/autotest_common.sh@10 -- # set +x 00:37:20.750 04:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:37:20.750 04:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:20.750 04:34:34 -- common/autotest_common.sh@10 -- # set +x 00:37:20.750 04:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:37:20.750 04:34:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:20.750 04:34:34 -- common/autotest_common.sh@10 -- # set +x 00:37:20.750 [2024-05-14 04:34:34.770669] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.750 04:34:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:37:20.750 04:34:34 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:20.751 04:34:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:20.751 04:34:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:20.751 04:34:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:20.751 04:34:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:20.751 04:34:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:20.751 04:34:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:20.751 04:34:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:20.751 04:34:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:20.751 04:34:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:20.751 04:34:34 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:20.751 EAL: No free 2048 kB hugepages reported on node 1 00:37:24.029 Initializing NVMe Controllers 00:37:24.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:37:24.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:37:24.029 Initialization complete. Launching workers. 00:37:24.029 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 16295, failed: 0 00:37:24.029 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1665, failed to submit 14630 00:37:24.029 success 778, unsuccess 887, failed 0 00:37:24.029 04:34:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:24.030 04:34:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:24.030 EAL: No free 2048 kB hugepages reported on node 1 00:37:27.309 [2024-05-14 04:34:41.169225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.309 [2024-05-14 04:34:41.169275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.309 [2024-05-14 04:34:41.169284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.309 [2024-05-14 04:34:41.169291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.310 [2024-05-14 04:34:41.169299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.310 [2024-05-14 04:34:41.169306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.310 [2024-05-14 04:34:41.169313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be 
set 00:37:27.310 [2024-05-14 04:34:41.169321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.310 [2024-05-14 04:34:41.169328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.310 [2024-05-14 04:34:41.169335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.310 [2024-05-14 04:34:41.169342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.310 [2024-05-14 04:34:41.169350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:37:27.310 Initializing NVMe Controllers 00:37:27.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:37:27.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:37:27.310 Initialization complete. Launching workers. 00:37:27.310 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8450, failed: 0 00:37:27.310 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1225, failed to submit 7225 00:37:27.310 success 334, unsuccess 891, failed 0 00:37:27.310 04:34:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:27.310 04:34:41 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:37:27.310 EAL: No free 2048 kB hugepages reported on node 1 00:37:30.592 Initializing NVMe Controllers 00:37:30.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:37:30.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:37:30.592 Initialization complete. Launching workers. 
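The three spdk_target_abort runs in this block come from a single loop: rabort() sweeps the abort example's queue depth over 4, 24 and 64 while every other parameter stays fixed, so the only variable across the three result sets is -q:

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done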
00:37:30.592 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 41276, failed: 0 00:37:30.592 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2625, failed to submit 38651 00:37:30.592 success 610, unsuccess 2015, failed 0 00:37:30.592 04:34:44 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:37:30.592 04:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.592 04:34:44 -- common/autotest_common.sh@10 -- # set +x 00:37:30.593 04:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:30.593 04:34:44 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:30.593 04:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:37:30.593 04:34:44 -- common/autotest_common.sh@10 -- # set +x 00:37:32.495 04:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:37:32.495 04:34:46 -- target/abort_qd_sizes.sh@62 -- # killprocess 99789 00:37:32.495 04:34:46 -- common/autotest_common.sh@926 -- # '[' -z 99789 ']' 00:37:32.495 04:34:46 -- common/autotest_common.sh@930 -- # kill -0 99789 00:37:32.495 04:34:46 -- common/autotest_common.sh@931 -- # uname 00:37:32.495 04:34:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:32.495 04:34:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99789 00:37:32.495 04:34:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:32.495 04:34:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:32.495 04:34:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99789' 00:37:32.495 killing process with pid 99789 00:37:32.495 04:34:46 -- common/autotest_common.sh@945 -- # kill 99789 00:37:32.495 04:34:46 -- common/autotest_common.sh@950 -- # wait 99789 00:37:32.754 00:37:32.754 real 0m15.434s 00:37:32.754 user 1m1.616s 00:37:32.754 sys 0m1.260s 00:37:32.754 04:34:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:32.754 04:34:47 -- common/autotest_common.sh@10 -- # set +x 00:37:32.754 ************************************ 00:37:32.754 END TEST spdk_target_abort 00:37:32.754 ************************************ 00:37:33.013 04:34:47 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:37:33.013 04:34:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:37:33.013 04:34:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:33.013 04:34:47 -- common/autotest_common.sh@10 -- # set +x 00:37:33.013 ************************************ 00:37:33.013 START TEST kernel_target_abort 00:37:33.013 ************************************ 00:37:33.013 04:34:47 -- common/autotest_common.sh@1104 -- # kernel_target 00:37:33.013 04:34:47 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:37:33.013 04:34:47 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:37:33.013 04:34:47 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:37:33.013 04:34:47 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:37:33.013 04:34:47 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:37:33.013 04:34:47 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:37:33.013 04:34:47 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:33.013 04:34:47 -- nvmf/common.sh@627 -- # local block nvme 00:37:33.013 04:34:47 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:37:33.013 04:34:47 -- nvmf/common.sh@630 -- # modprobe nvmet 00:37:33.013 04:34:47 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:33.013 04:34:47 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:37:35.552 Waiting for block devices as requested 00:37:35.552 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:37:35.552 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:35.552 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:35.552 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:35.810 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:37:35.810 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:35.810 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:37:35.810 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:36.068 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:37:36.068 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:36.068 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:37:36.068 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:37:36.328 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:37:36.328 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:37:36.328 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:36.328 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:37:36.588 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:37:36.588 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:37:37.525 04:34:51 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:37:37.525 04:34:51 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:37.525 04:34:51 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:37:37.525 04:34:51 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:37:37.525 04:34:51 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:37.525 No valid GPT data, bailing 00:37:37.525 04:34:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:37.525 04:34:51 -- scripts/common.sh@393 -- # pt= 00:37:37.525 04:34:51 -- scripts/common.sh@394 -- # return 1 00:37:37.525 04:34:51 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:37:37.525 04:34:51 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:37:37.525 04:34:51 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:37:37.525 04:34:51 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:37:37.525 04:34:51 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:37:37.525 04:34:51 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:37:37.525 No valid GPT data, bailing 00:37:37.525 04:34:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:37:37.525 04:34:51 -- scripts/common.sh@393 -- # pt= 00:37:37.525 04:34:51 -- scripts/common.sh@394 -- # return 1 00:37:37.525 04:34:51 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:37:37.525 04:34:51 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme1n1 ]] 00:37:37.525 04:34:51 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:37:37.525 04:34:51 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:37:37.525 04:34:51 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:37.525 04:34:51 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:37:37.525 04:34:51 -- nvmf/common.sh@654 -- # echo 1 00:37:37.525 04:34:51 -- nvmf/common.sh@655 -- # echo /dev/nvme1n1 00:37:37.525 04:34:51 -- nvmf/common.sh@656 -- # echo 1 00:37:37.525 04:34:51 -- nvmf/common.sh@662 -- # echo 10.0.0.1 
00:37:37.525 04:34:51 -- nvmf/common.sh@663 -- # echo tcp 00:37:37.525 04:34:51 -- nvmf/common.sh@664 -- # echo 4420 00:37:37.525 04:34:51 -- nvmf/common.sh@665 -- # echo ipv4 00:37:37.525 04:34:51 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:37.525 04:34:51 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.1 -t tcp -s 4420 00:37:37.525 00:37:37.525 Discovery Log Number of Records 2, Generation counter 2 00:37:37.525 =====Discovery Log Entry 0====== 00:37:37.525 trtype: tcp 00:37:37.525 adrfam: ipv4 00:37:37.525 subtype: current discovery subsystem 00:37:37.525 treq: not specified, sq flow control disable supported 00:37:37.525 portid: 1 00:37:37.525 trsvcid: 4420 00:37:37.525 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:37.525 traddr: 10.0.0.1 00:37:37.525 eflags: none 00:37:37.525 sectype: none 00:37:37.525 =====Discovery Log Entry 1====== 00:37:37.525 trtype: tcp 00:37:37.525 adrfam: ipv4 00:37:37.525 subtype: nvme subsystem 00:37:37.525 treq: not specified, sq flow control disable supported 00:37:37.525 portid: 1 00:37:37.525 trsvcid: 4420 00:37:37.525 subnqn: kernel_target 00:37:37.525 traddr: 10.0.0.1 00:37:37.525 eflags: none 00:37:37.525 sectype: none 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:37.525 04:34:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:37:37.525 EAL: No free 2048 kB hugepages reported on node 1 00:37:40.826 Initializing 
NVMe Controllers 00:37:40.826 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:37:40.826 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:37:40.826 Initialization complete. Launching workers. 00:37:40.826 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 64899, failed: 0 00:37:40.826 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 64899, failed to submit 0 00:37:40.826 success 0, unsuccess 64899, failed 0 00:37:40.826 04:34:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:40.826 04:34:55 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:37:40.826 EAL: No free 2048 kB hugepages reported on node 1 00:37:44.109 Initializing NVMe Controllers 00:37:44.109 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:37:44.109 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:37:44.109 Initialization complete. Launching workers. 00:37:44.109 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 115643, failed: 0 00:37:44.109 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29070, failed to submit 86573 00:37:44.109 success 0, unsuccess 29070, failed 0 00:37:44.109 04:34:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:44.109 04:34:58 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:37:44.109 EAL: No free 2048 kB hugepages reported on node 1 00:37:47.434 Initializing NVMe Controllers 00:37:47.434 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:37:47.434 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:37:47.434 Initialization complete. Launching workers. 
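The kernel-side target these runs abort against was configured a few lines earlier (configure_kernel_target, 04:34:51) through the in-kernel nvmet configfs tree. xtrace does not show redirection targets, so the attribute file names below follow the stock nvmet configfs layout rather than a literal copy of the trace; the values match the echoes that were traced:

modprobe nvmet
modprobe nvmet-tcp        # tcp transport module; may be auto-loaded on some kernels
sub=/sys/kernel/config/nvmet/subsystems/kernel_target
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$port"
echo SPDK-kernel_target > "$sub/attr_serial"
echo 1                  > "$sub/attr_allow_any_host"
echo /dev/nvme1n1       > "$sub/namespaces/1/device_path"
echo 1                  > "$sub/namespaces/1/enable"
echo 10.0.0.1           > "$port/addr_traddr"
echo tcp                > "$port/addr_trtype"
echo 4420               > "$port/addr_trsvcid"
echo ipv4               > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420   # produces the discovery log records shown above

clean_kernel_target at the end of the test undoes this in reverse, as traced at 04:35:01: disable the namespace, remove the port symlink, rmdir the configfs entries, then modprobe -r nvmet_tcp nvmet.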
00:37:47.434 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 111441, failed: 0 00:37:47.434 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27862, failed to submit 83579 00:37:47.434 success 0, unsuccess 27862, failed 0 00:37:47.434 04:35:01 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:37:47.434 04:35:01 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:37:47.434 04:35:01 -- nvmf/common.sh@677 -- # echo 0 00:37:47.434 04:35:01 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:37:47.434 04:35:01 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:37:47.434 04:35:01 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:47.434 04:35:01 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:37:47.434 04:35:01 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:37:47.434 04:35:01 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:37:47.434 00:37:47.434 real 0m14.033s 00:37:47.434 user 0m6.523s 00:37:47.434 sys 0m3.664s 00:37:47.434 04:35:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:47.434 04:35:01 -- common/autotest_common.sh@10 -- # set +x 00:37:47.434 ************************************ 00:37:47.434 END TEST kernel_target_abort 00:37:47.434 ************************************ 00:37:47.434 04:35:01 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:37:47.434 04:35:01 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:37:47.434 04:35:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:37:47.434 04:35:01 -- nvmf/common.sh@116 -- # sync 00:37:47.434 04:35:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:37:47.434 04:35:01 -- nvmf/common.sh@119 -- # set +e 00:37:47.434 04:35:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:37:47.434 04:35:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:37:47.434 rmmod nvme_tcp 00:37:47.434 rmmod nvme_fabrics 00:37:47.434 rmmod nvme_keyring 00:37:47.434 04:35:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:37:47.434 04:35:01 -- nvmf/common.sh@123 -- # set -e 00:37:47.434 04:35:01 -- nvmf/common.sh@124 -- # return 0 00:37:47.434 04:35:01 -- nvmf/common.sh@477 -- # '[' -n 99789 ']' 00:37:47.434 04:35:01 -- nvmf/common.sh@478 -- # killprocess 99789 00:37:47.434 04:35:01 -- common/autotest_common.sh@926 -- # '[' -z 99789 ']' 00:37:47.434 04:35:01 -- common/autotest_common.sh@930 -- # kill -0 99789 00:37:47.434 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (99789) - No such process 00:37:47.434 04:35:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 99789 is not found' 00:37:47.434 Process with pid 99789 is not found 00:37:47.434 04:35:01 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:37:47.434 04:35:01 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:37:49.339 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:37:49.598 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:37:49.598 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:37:49.598 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:37:49.598 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:37:49.598 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:37:49.598 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:37:49.598 0000:f6:02.0 (8086 0cfe): 
Already using the idxd driver 00:37:49.598 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:37:49.598 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:37:49.598 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:37:49.598 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:37:49.598 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:37:49.598 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:37:49.599 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:37:49.858 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:37:49.858 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:37:49.858 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:37:49.858 04:35:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:37:49.858 04:35:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:37:49.858 04:35:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:49.858 04:35:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:37:49.858 04:35:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.858 04:35:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:49.858 04:35:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.388 04:35:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:37:52.388 00:37:52.388 real 0m45.992s 00:37:52.388 user 1m11.908s 00:37:52.388 sys 0m12.652s 00:37:52.388 04:35:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:52.388 04:35:06 -- common/autotest_common.sh@10 -- # set +x 00:37:52.388 ************************************ 00:37:52.388 END TEST nvmf_abort_qd_sizes 00:37:52.388 ************************************ 00:37:52.388 04:35:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:52.388 04:35:06 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:52.388 04:35:06 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:52.388 04:35:06 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:52.388 04:35:06 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:37:52.388 04:35:06 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:37:52.388 04:35:06 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:37:52.388 04:35:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:37:52.388 04:35:06 -- common/autotest_common.sh@10 -- # set +x 00:37:52.388 04:35:06 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:37:52.388 04:35:06 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:37:52.388 04:35:06 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:37:52.388 04:35:06 -- common/autotest_common.sh@10 -- # set +x 00:37:57.657 INFO: APP EXITING 00:37:57.657 INFO: killing all VMs 00:37:57.657 INFO: killing vhost app 00:37:57.657 INFO: EXIT DONE 00:38:00.185 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:38:00.185 0000:74:02.0 (8086 0cfe): 
Already using the idxd driver 00:38:00.185 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:38:00.185 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:38:00.185 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:38:00.185 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:38:00.185 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:38:00.185 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:38:00.185 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:38:00.185 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:38:00.185 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:38:00.185 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:38:00.185 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:38:00.185 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:38:00.185 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:38:00.185 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:38:00.185 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:38:00.185 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:38:02.717 Cleaning 00:38:02.717 Removing: /var/run/dpdk/spdk0/config 00:38:02.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:02.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:02.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:02.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:02.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:02.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:02.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:02.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:02.717 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:02.717 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:02.717 Removing: /var/run/dpdk/spdk1/config 00:38:02.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:02.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:02.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:02.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:02.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:02.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:02.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:02.717 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:02.717 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:02.717 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:02.717 Removing: /var/run/dpdk/spdk2/config 00:38:02.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:02.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:02.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:02.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:02.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:02.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:02.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:02.717 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:02.717 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:02.717 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:02.718 Removing: /var/run/dpdk/spdk3/config 00:38:02.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:02.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:02.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:02.718 
Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:02.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:02.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:02.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:02.718 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:02.718 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:02.718 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:02.718 Removing: /var/run/dpdk/spdk4/config 00:38:02.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:02.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:02.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:02.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:02.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:02.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:02.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:02.718 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:02.718 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:02.718 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:02.718 Removing: /dev/shm/nvmf_trace.0 00:38:02.718 Removing: /dev/shm/spdk_tgt_trace.pid3790002 00:38:02.718 Removing: /var/run/dpdk/spdk0 00:38:02.718 Removing: /var/run/dpdk/spdk1 00:38:02.718 Removing: /var/run/dpdk/spdk2 00:38:02.718 Removing: /var/run/dpdk/spdk3 00:38:02.718 Removing: /var/run/dpdk/spdk4 00:38:02.718 Removing: /var/run/dpdk/spdk_pid100588 00:38:02.718 Removing: /var/run/dpdk/spdk_pid101193 00:38:02.975 Removing: /var/run/dpdk/spdk_pid101795 00:38:02.975 Removing: /var/run/dpdk/spdk_pid105137 00:38:02.975 Removing: /var/run/dpdk/spdk_pid105705 00:38:02.975 Removing: /var/run/dpdk/spdk_pid106296 00:38:02.975 Removing: /var/run/dpdk/spdk_pid13682 00:38:02.975 Removing: /var/run/dpdk/spdk_pid14424 00:38:02.975 Removing: /var/run/dpdk/spdk_pid16916 00:38:02.975 Removing: /var/run/dpdk/spdk_pid18139 00:38:02.975 Removing: /var/run/dpdk/spdk_pid2536 00:38:02.975 Removing: /var/run/dpdk/spdk_pid26598 00:38:02.975 Removing: /var/run/dpdk/spdk_pid30471 00:38:02.975 Removing: /var/run/dpdk/spdk_pid36744 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3784548 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3786847 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3790002 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3790816 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3793677 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3795997 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3796580 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3797106 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3797603 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3797964 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3798300 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3798617 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3798961 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3799783 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3803171 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3803510 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3803869 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3804142 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3805083 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3805096 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3806038 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3806219 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3806655 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3806691 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3807024 00:38:02.975 Removing: 
/var/run/dpdk/spdk_pid3807321 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3808038 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3808355 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3808763 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3811721 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3813537 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3815171 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3817236 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3819074 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3820925 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3823033 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3824853 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3826743 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3828792 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3830635 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3832509 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3834572 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3836409 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3838513 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3840334 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3842337 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3844260 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3846474 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3848621 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3850572 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3852392 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3854506 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3856327 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3858180 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3860252 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3862101 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3863924 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3866036 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3867856 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3869970 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3871794 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3873753 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3875728 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3877556 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3879611 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3881503 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3883871 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3885966 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3887781 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3889624 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3891594 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3893548 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3895722 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3898246 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3902560 00:38:02.975 Removing: /var/run/dpdk/spdk_pid3996300 00:38:02.975 Removing: /var/run/dpdk/spdk_pid4001141 00:38:02.975 Removing: /var/run/dpdk/spdk_pid4011885 00:38:02.975 Removing: /var/run/dpdk/spdk_pid4017940 00:38:02.975 Removing: /var/run/dpdk/spdk_pid4022583 00:38:02.975 Removing: /var/run/dpdk/spdk_pid4023702 00:38:03.234 Removing: /var/run/dpdk/spdk_pid4028739 00:38:03.234 Removing: /var/run/dpdk/spdk_pid4029079 00:38:03.234 Removing: /var/run/dpdk/spdk_pid4033944 00:38:03.234 Removing: /var/run/dpdk/spdk_pid4040603 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4043412 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4055157 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4065496 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4067610 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4068825 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4089183 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4093731 00:38:03.235 Removing: 
/var/run/dpdk/spdk_pid4098877 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4100837 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4103105 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4103409 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4103612 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4103857 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4104695 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4106822 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4108097 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4108744 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4115114 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4121725 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4127449 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4166543 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4171244 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4180520 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4180683 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4186483 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4186782 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4187039 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4187480 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4187636 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4190052 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4191958 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4194040 00:38:03.235 Removing: /var/run/dpdk/spdk_pid43116 00:38:03.235 Removing: /var/run/dpdk/spdk_pid4625 00:38:03.235 Removing: /var/run/dpdk/spdk_pid50050 00:38:03.235 Removing: /var/run/dpdk/spdk_pid52463 00:38:03.235 Removing: /var/run/dpdk/spdk_pid54588 00:38:03.235 Removing: /var/run/dpdk/spdk_pid56802 00:38:03.235 Removing: /var/run/dpdk/spdk_pid59479 00:38:03.235 Removing: /var/run/dpdk/spdk_pid60119 00:38:03.235 Removing: /var/run/dpdk/spdk_pid61008 00:38:03.235 Removing: /var/run/dpdk/spdk_pid61637 00:38:03.235 Removing: /var/run/dpdk/spdk_pid63094 00:38:03.235 Removing: /var/run/dpdk/spdk_pid6726 00:38:03.235 Removing: /var/run/dpdk/spdk_pid72432 00:38:03.235 Removing: /var/run/dpdk/spdk_pid72529 00:38:03.235 Removing: /var/run/dpdk/spdk_pid79285 00:38:03.235 Removing: /var/run/dpdk/spdk_pid81805 00:38:03.235 Removing: /var/run/dpdk/spdk_pid84278 00:38:03.235 Removing: /var/run/dpdk/spdk_pid85778 00:38:03.235 Removing: /var/run/dpdk/spdk_pid88340 00:38:03.235 Removing: /var/run/dpdk/spdk_pid90001 00:38:03.235 Clean 00:38:03.235 killing process with pid 3732753 00:38:11.351 killing process with pid 3732750 00:38:11.612 killing process with pid 3732752 00:38:11.872 killing process with pid 3732751 00:38:11.872 04:35:26 -- common/autotest_common.sh@1436 -- # return 0 00:38:11.872 04:35:26 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:38:11.872 04:35:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:11.872 04:35:26 -- common/autotest_common.sh@10 -- # set +x 00:38:11.872 04:35:26 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:38:11.872 04:35:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:38:11.872 04:35:26 -- common/autotest_common.sh@10 -- # set +x 00:38:11.872 04:35:26 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:38:11.872 04:35:26 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log ]] 00:38:11.872 04:35:26 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log 00:38:11.872 04:35:26 -- spdk/autotest.sh@394 -- # hash lcov 00:38:11.872 04:35:26 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:38:11.872 
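Note: the clean_kernel_target trace at the start of this cleanup phase tears down the configfs-based kernel NVMe-oF target before unloading the nvmet modules. A minimal hand-written sketch of that sequence follows; the configfs paths, the "kernel_target" subsystem name, port 1 and namespace 1 are taken from the trace above, while the redirect target of the traced "echo 0" is not shown by xtrace and is assumed here to be the namespace's enable attribute.

    cfg=/sys/kernel/config/nvmet
    if [[ -e $cfg/subsystems/kernel_target ]]; then
        echo 0 > $cfg/subsystems/kernel_target/namespaces/1/enable   # assumed target of the traced "echo 0"
        rm -f $cfg/ports/1/subsystems/kernel_target                  # unlink the subsystem from TCP port 1
        rmdir $cfg/subsystems/kernel_target/namespaces/1             # remove the namespace
        rmdir $cfg/ports/1                                           # remove the port
        rmdir $cfg/subsystems/kernel_target                          # remove the subsystem itself
        modprobe -r nvmet_tcp nvmet                                  # unload the kernel target modules
    fi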
04:35:26 -- spdk/autotest.sh@396 -- # hostname 00:38:11.872 04:35:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/dsa-phy-autotest/spdk -t spdk-fcp-07 -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info 00:38:12.132 geninfo: WARNING: invalid characters removed from testname! 00:38:34.123 04:35:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:34.123 04:35:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:35.496 04:35:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:36.869 04:35:51 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:38.241 04:35:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:39.181 04:35:53 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:38:40.557 04:35:54 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:40.557 04:35:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:38:40.557 04:35:54 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:40.557 04:35:54 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:40.557 04:35:54 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:40.557 04:35:54 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.558 04:35:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.558 04:35:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.558 04:35:54 -- paths/export.sh@5 -- $ export PATH 00:38:40.558 04:35:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.558 04:35:54 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:38:40.558 04:35:54 -- common/autobuild_common.sh@435 -- $ date +%s 00:38:40.558 04:35:54 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715654154.XXXXXX 00:38:40.558 04:35:54 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715654154.77Hbxe 00:38:40.558 04:35:54 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:38:40.558 04:35:54 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:38:40.558 04:35:54 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:38:40.558 04:35:54 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:40.558 04:35:54 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:40.558 04:35:54 -- common/autobuild_common.sh@451 -- $ get_config_params 00:38:40.558 04:35:54 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:38:40.558 04:35:54 -- common/autotest_common.sh@10 -- $ set +x 00:38:40.558 04:35:55 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:38:40.558 04:35:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128 00:38:40.558 04:35:55 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:38:40.558 04:35:55 -- spdk/autopackage.sh@13 -- $ [[ 0 
-eq 1 ]] 00:38:40.558 04:35:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:40.558 04:35:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:40.558 04:35:55 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:40.558 04:35:55 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:40.558 04:35:55 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:40.558 04:35:55 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:38:40.558 04:35:55 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:40.558 + [[ -n 3690416 ]] 00:38:40.558 + sudo kill 3690416 00:38:40.567 [Pipeline] } 00:38:40.584 [Pipeline] // stage 00:38:40.589 [Pipeline] } 00:38:40.605 [Pipeline] // timeout 00:38:40.610 [Pipeline] } 00:38:40.624 [Pipeline] // catchError 00:38:40.629 [Pipeline] } 00:38:40.645 [Pipeline] // wrap 00:38:40.650 [Pipeline] } 00:38:40.664 [Pipeline] // catchError 00:38:40.671 [Pipeline] stage 00:38:40.673 [Pipeline] { (Epilogue) 00:38:40.687 [Pipeline] catchError 00:38:40.689 [Pipeline] { 00:38:40.702 [Pipeline] echo 00:38:40.703 Cleanup processes 00:38:40.708 [Pipeline] sh 00:38:40.994 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:38:40.994 121804 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:38:41.007 [Pipeline] sh 00:38:41.294 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:38:41.294 ++ grep -v 'sudo pgrep' 00:38:41.294 ++ awk '{print $1}' 00:38:41.294 + sudo kill -9 00:38:41.294 + true 00:38:41.306 [Pipeline] sh 00:38:41.592 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:51.575 [Pipeline] sh 00:38:51.855 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:51.855 Artifacts sizes are good 00:38:51.867 [Pipeline] archiveArtifacts 00:38:51.873 Archiving artifacts 00:38:52.128 [Pipeline] sh 00:38:52.437 + sudo chown -R sys_sgci /var/jenkins/workspace/dsa-phy-autotest 00:38:52.450 [Pipeline] cleanWs 00:38:52.460 [WS-CLEANUP] Deleting project workspace... 00:38:52.460 [WS-CLEANUP] Deferred wipeout is used... 00:38:52.465 [WS-CLEANUP] done 00:38:52.467 [Pipeline] } 00:38:52.484 [Pipeline] // catchError 00:38:52.497 [Pipeline] sh 00:38:52.782 + logger -p user.info -t JENKINS-CI 00:38:52.791 [Pipeline] } 00:38:52.806 [Pipeline] // stage 00:38:52.811 [Pipeline] } 00:38:52.826 [Pipeline] // node 00:38:52.830 [Pipeline] End of Pipeline 00:38:52.851 Finished: SUCCESS
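Closing note: the coverage post-processing traced near the end of the run (the spdk/autotest.sh lcov steps above) captures test-time coverage, merges it with the pre-test baseline, and strips code that is outside SPDK proper. The following is a hand-condensed sketch of those traced commands, not the exact script; the output directory is the resolved form of spdk/../output for this workspace, and the long --rc flag set is collected into one variable.

    RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
    OUT=/var/jenkins/workspace/dsa-phy-autotest/output    # assumed expansion of spdk/../output
    SRC=/var/jenkins/workspace/dsa-phy-autotest/spdk

    # Capture coverage gathered during the test run, then merge it with the baseline capture.
    lcov $RC --no-external -q -c -d $SRC -t spdk-fcp-07 -o $OUT/cov_test.info
    lcov $RC --no-external -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

    # Remove DPDK sources, system headers, and sample apps from the combined report.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC --no-external -q -r $OUT/cov_total.info "$pat" -o $OUT/cov_total.info
    done

    # Intermediate captures are discarded once cov_total.info exists (paths relative in the trace).
    rm -f $OUT/cov_base.info $OUT/cov_test.info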